Search Results

Search found 20846 results on 834 pages for 'current dir'.

Page 44/834 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • How do I make QComboBox paint the item delegate for its current item? (Qt 4)

    - by r4nj33t
    The item delegate I set on my QComboBox is not used to paint the current item. I am trying to create a combo box showing different line types (solid, dotted, dashed, etc.). Currently I am setting an item delegate for its content so that each entry is drawn/painted as a line type instead of displaying its name. All line types draw correctly in the drop-down, but as soon as I select a line type from the combo box, the current index displays just the line's name and does not paint it. How can I make it paint the selected line type for the current combo box index?
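
    One possible approach, offered as a hedged sketch rather than the definitive answer: a non-editable QComboBox paints its current item itself (through its style), not through the item delegate, so a common workaround is to subclass QComboBox and override paintEvent to draw the line style into the field rectangle. The class name below and the assumption that each item stores its Qt::PenStyle as item data are illustrative.

        #include <QComboBox>
        #include <QStylePainter>
        #include <QStyleOptionComboBox>
        #include <QPen>

        class LineStyleComboBox : public QComboBox
        {
        protected:
            void paintEvent(QPaintEvent *)
            {
                QStylePainter painter(this);
                QStyleOptionComboBox opt;
                initStyleOption(&opt);
                opt.currentText.clear();                    // keep the style from drawing the item's name
                painter.drawComplexControl(QStyle::CC_ComboBox, opt);
                // draw the selected pen style where the label would normally appear
                QRect field = style()->subControlRect(QStyle::CC_ComboBox, &opt,
                                                      QStyle::SC_ComboBoxEditField, this);
                Qt::PenStyle ps = static_cast<Qt::PenStyle>(itemData(currentIndex()).toInt());
                painter.setPen(QPen(palette().color(QPalette::Text), 2, ps));
                painter.drawLine(field.left() + 4, field.center().y(),
                                 field.right() - 4, field.center().y());
            }
        };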

    Read the article

  • Is SQLite a must for Merb?

    - by mayank
    Hello all, I have a question regarding Merb's dependency on SQLite. I am going to install Merb on my machine, and I don't have SQLite installed on it. I tried the command "gem install merb" and got the following error. If there is any way to install Merb with MySQL instead, please tell me. Thanks, Mayank. Building native extensions. This could take a while... ERROR: Error installing merb: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for sqlite3.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out

    Read the article

  • Is SQLite required for Merb?

    - by mayank
    I have a question regarding Merb's dependency on SQLite. I am going to install Merb on my machine, which does not have SQLite installed. I tried the command "gem install merb" and saw the following error. If there is any way to install Merb with MySQL instead, please tell me. Building native extensions. This could take a while... ERROR: Error installing merb: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for sqlite3.h... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out
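
    For what it's worth, the failure is in the native build of the do_sqlite3 gem (which the merb meta-gem pulls in by default): extconf.rb cannot find sqlite3.h because the SQLite development headers are not installed. A hedged sketch of the usual fix on a Debian/Ubuntu box (the package names are an assumption about the distribution):

        # install the SQLite development headers, then retry the install
        sudo apt-get install sqlite3 libsqlite3-dev
        sudo gem install merb

    Using MySQL instead is a separate step (for example installing the do_mysql adapter), but the meta-gem's default dependency on do_sqlite3 still needs the headers above to build.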

    Read the article

  • MySQL - Selecting a specific date range to get "current" popular screensavers.

    - by Joe
    Let's say I have a screensaver website. I want to display the CURRENT top 100 screensavers on the front page of the website. What I mean is, the "RECENT" top 100 screensavers. What would be an example query to do this? My current one is: SELECT * FROM tbl_screensavers WHERE WEEK(tbl_screensavers.DateAdded) = WEEK('".date("Y-m-d H:i:s",strtotime("-1 week"))."') ORDER BY tbl_screensavers.ViewsCount, tbl_screensavers.DateAdded This will select the most viewed ("tbl_screensavers.ViewsCount") screensavers that were added ("tbl_screensavers.DateAdded") in the last week. However, in some cases there are no screensavers, or fewer than 100 screensavers, submitted in that week. So, how can I perform a query which would select the "RECENT" top 100 screensavers? Hopefully you have an idea of what I'm trying to accomplish when I say "RECENT" or "CURRENT" top screensavers -- aka. the most viewed, recently - not the most viewed, all-time.
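
    One possible query, offered as a hedged sketch (table and column names are taken from the question; the 7-day window is an arbitrary choice): rank submissions from the recent window first and let older rows fill the remainder, so the result always has up to 100 rows:

        SELECT *
        FROM tbl_screensavers
        ORDER BY (DateAdded >= NOW() - INTERVAL 7 DAY) DESC,  -- recent rows float to the top
                 ViewsCount DESC,
                 DateAdded DESC
        LIMIT 100;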

    Read the article

  • Can't install pg gem on Windows

    - by sNiCKY
    I've got 2 Ruby versions, 1.8.7 and 1.9.2, and PostgreSQL 8.3. I can't install the pg gem on either of them. I'm getting this error: C:/Development/Ruby187/bin/ruby.exe extconf.rb checking for pg_config... yes not recorded checking for libpq-fe.h... no Can't find the 'libpq-fe.h header *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=C:/Development/Ruby187/bin/ruby --with-pg --without-pg --with-pg-config --without-pg-config --with-pg-dir --without-pg-dir --with-pg-include --without-pg-include=${pg-dir}/include --with-pg-lib --without-pg-lib=${pg-dir}/lib I know it's a common problem, but I haven't found any working solution yet... Oh, I have added C:\Program Files (x86)\PostgreSQL\8.3\bin to my PATH.
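
    A hedged guess at the usual fix (the PostgreSQL path below comes from the question and may differ on your machine): pg_config is found but the headers are not, so pass the include and lib locations to the build explicitly via the options listed in the log:

        gem install pg -- --with-pg-include="C:/Program Files (x86)/PostgreSQL/8.3/include" --with-pg-lib="C:/Program Files (x86)/PostgreSQL/8.3/lib"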

    Read the article

  • Can't install do_mysql gem?

    - by maccy1
    I'm trying to install do_mysql on my 13" MacBook Pro running Snow Leopard, but I keep getting this error: n216-160:~ myself$ sudo gem1.9 install do_mysql Password: Building native extensions. This could take a while... ERROR: Error installing do_mysql: ERROR: Failed to build gem native extension. /opt/local/bin/ruby1.9 extconf.rb checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/opt/local/bin/ruby1.9 --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0 for inspection. Results logged to /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0/ext/do_mysql_ext/gem_make.out n216-160:~ myself$ I have no idea why. I also reinstalled my version of MySQL with the MySQL 5.4.3 beta, 64-bit, as others suggested, but no dice. Does anyone have any idea what is wrong?
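
    A commonly suggested fix, offered as a hedged sketch (the mysql_config path assumes the standard MySQL package location; adjust it to your install): build the extension for the right architecture and point it at mysql_config so it can find libmysqlclient:

        sudo env ARCHFLAGS="-arch x86_64" gem1.9 install do_mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config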

    Read the article

  • Fastest way to recursively crawl NTFS directories in C++

    - by Peter Parker
    I have written a small crawler to scan and re-sort directory structures. It is based on dirent (which is a small wrapper around FindNextFileA). In my first benchmarks it is surprisingly slow: around 123473 ms for 4500 files (ThinkPad T60p, local Samsung 320 GB 2.5" HD). 121481 files found in 123473 milliseconds. Is this speed normal? This is my code: int testPrintDir(std::string strDir, std::string strPattern="*", bool recurse=true){ struct dirent *ent; DIR *dir; dir = opendir (strDir.c_str()); int retVal = 0; if (dir != NULL) { while ((ent = readdir (dir)) != NULL) { if (strcmp(ent->d_name, ".") !=0 && strcmp(ent->d_name, "..") !=0){ std::string strFullName = strDir +"\\"+std::string(ent->d_name); std::string strType = "N/A"; bool isDir = (ent->data.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) !=0; strType = (isDir)?"DIR":"FILE"; if ((!isDir)){ //printf ("%s <%s>\n", strFullName.c_str(),strType.c_str());//ent->d_name); retVal++; } if (isDir && recurse){ retVal += testPrintDir(strFullName, strPattern, recurse); } } } closedir (dir); return retVal; } else { /* could not open directory */ perror ("DIR NOT FOUND!"); return -1; } }
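
    For what it's worth, a first pass over a large tree is usually dominated by cold NTFS metadata; a second run over the same directories tends to be far faster once the file system cache is warm. If first-run speed matters, one option is to skip the dirent wrapper and call FindFirstFileEx with a large fetch buffer. A hedged sketch (FindExInfoBasic and FIND_FIRST_EX_LARGE_FETCH require Windows 7 / Server 2008 R2 or later):

        #include <windows.h>
        #include <string>

        int countFiles(const std::string &dir)
        {
            WIN32_FIND_DATAA ffd;
            HANDLE h = FindFirstFileExA((dir + "\\*").c_str(), FindExInfoBasic, &ffd,
                                        FindExSearchNameMatch, NULL, FIND_FIRST_EX_LARGE_FETCH);
            if (h == INVALID_HANDLE_VALUE)
                return 0;                                    // could not open directory
            int count = 0;
            do {
                std::string name = ffd.cFileName;
                if (name == "." || name == "..")
                    continue;
                if (ffd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                    count += countFiles(dir + "\\" + name);  // recurse into subdirectory
                else
                    ++count;
            } while (FindNextFileA(h, &ffd));
            FindClose(h);
            return count;
        }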

    Read the article

  • Why is HttpContext.Current.Session available in HttpModule but not in Response.Filter?

    - by Abtin Forouzandeh
    I have written an HttpModule that adds a response filter. The filter is capturing page output and storing it in a session variable. I am able to access HttpContext.Current.Session in my HttpModule. The HttpModule is handling the PostAcquireRequestState event, and I am still able to access HttpContext.Current.Session in the PostAcquireRequestState event. I am adding a custom stream that inherits from Stream to Response.Filter. HttpContext.Current.Session is null when accessed from the Stream.Write method. Everything worked fine when using InProc session state. However, I now must use StateServer, and with StateServer the code is broken. Any ideas?
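
    One workaround that is often suggested, offered here as a hedged sketch rather than a guaranteed fix for the StateServer timing: capture the HttpContext (or the session object itself) in the filter's constructor at the moment the module installs it, instead of reading HttpContext.Current inside Write. The class and session key names below are illustrative:

        using System;
        using System.IO;
        using System.Text;
        using System.Web;

        public class SessionCaptureFilter : Stream
        {
            private readonly Stream _inner;
            private readonly HttpContext _context;   // captured while it is still available

            public SessionCaptureFilter(Stream inner, HttpContext context)
            {
                _inner = inner;
                _context = context;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                string chunk = Encoding.UTF8.GetString(buffer, offset, count);
                if (_context.Session != null)
                    _context.Session["CapturedOutput"] =
                        (_context.Session["CapturedOutput"] as string ?? string.Empty) + chunk;
                _inner.Write(buffer, offset, count);
            }

            public override void Flush() { _inner.Flush(); }
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    The module would install it roughly as context.Response.Filter = new SessionCaptureFilter(context.Response.Filter, context); in PostAcquireRequestState, so the Write calls no longer depend on HttpContext.Current.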

    Read the article

  • File.Move: why do I get a FileNotFoundException? The file exists...

    - by acidzombie24
    It's extremely weird, since the program is iterating over that very folder! outfolder and infolder are both on H:\, my external HD, under Windows 7. The idea is to move all folders that only contain files with the extension db or svn-base. When I try to move the folder I get an exception: VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? This is weird. string []theExt = new string[] { "db", "svn-base" }; foreach (var dir in Directory.GetDirectories(infolder)) { bool hit = false; if (Directory.GetDirectories(dir).Count() > 0) continue; foreach (var f in Directory.GetFiles(dir)) { var ext = Path.GetExtension(f).Substring(1); if(theExt.Contains(ext) == false) { hit = true; break; } } if (!hit) { var dst = outfolder + "\\" + Path.GetFileName(dir); File.Move(dir, outfolder); //FileNotFoundException: Could not find file dir. } } }
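
    A hedged observation: File.Move reports "Could not find file" when its source argument is a directory path, and the destination passed here is the parent folder rather than the computed dst. Moving a folder is done with Directory.Move, so the call in the loop above would become roughly:

        // move the whole folder to the destination path built just above
        Directory.Move(dir, dst);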

    Read the article

  • TreeView Control Problem

    - by ProgNet
    Hi all, I have a public folder on the server that contains recursively nested subfolders. The various leaf folders contain images. I want to create a server-side file browser that displays the images to the user. I am using the ASP.NET TreeView control and create the tree nodes using PopulateOnDemand. If the user clicks on a leaf directory, I want the images in that folder to be displayed in a DataList control. The problem is that when I click on a sub tree node (after I have expanded its parent node), the whole expanded subtree disappears and only the parent node is shown, with no + sign next to it! (I have set the TreeView's PopulateNodesFromClient property to true.) Can someone tell me what the problem is? Thanks. Here is the code: <asp:TreeView ID="TreeView1" runat="server" AutoGenerateDataBindings="False" onselectednodechanged="TreeView1_SelectedNodeChanged" ontreenodepopulate="TreeView1_TreeNodePopulate"> </asp:TreeView> protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { string path = Server.MapPath("."); PopulateTopNodes(path); } } private void PopulateTopNodes(string pathToRootFolder) { DirectoryInfo dirInfo = new DirectoryInfo(pathToRootFolder); DirectoryInfo[] dirs = dirInfo.GetDirectories(); foreach (DirectoryInfo dir in dirs) { TreeNode folderNode = new TreeNode(dir.Name,dir.FullName); if (dir.GetDirectories().Length > 0) { folderNode.PopulateOnDemand = true; folderNode.Collapse(); } TreeView1.Nodes.Add(folderNode); } } protected void TreeView1_TreeNodePopulate(object sender, TreeNodeEventArgs e) { if (IsCallback == true) { if (e.Node.ChildNodes.Count == 0) { LoadChildNode(e.Node); } } } private void LoadChildNode(TreeNode treeNode) { DirectoryInfo dirInfo = new DirectoryInfo(treeNode.Value); DirectoryInfo[] dirs = dirInfo.GetDirectories(); foreach (DirectoryInfo dir in dirs) { TreeNode folderNode = new TreeNode(dir.Name, dir.FullName); if(dir.GetDirectories().Length>0){ folderNode.PopulateOnDemand = true; folderNode.Collapse(); } treeNode.ChildNodes.Add(folderNode); } } protected void TreeView1_SelectedNodeChanged(object sender, EventArgs e) { // Retrieve the images here }

    Read the article

  • Running a PHP script manually and getting back to the command line?

    - by Simpson88Keys
    I'm running a PHP script via the command line, and it works just fine, except that when it's finished, it doesn't go back to the command line. It just sits there, so I never know when it's done... This is the script: $conn_id = ftp_connect(REMOTE) or die("Couldn't connect to ".REMOTE); $login_result = ftp_login($conn_id, 'OMITTED','OMITTED'); if ((!$conn_id) || (!$login_result)) die("FTP Connection Failed"); $dir = 'download'; if ($dir != ".") { if (ftp_chdir($conn_id, $dir) == false) { echo ("Change Dir Failed: $dir<BR>\r\n"); return; } if (!(is_dir($dir))) mkdir($dir); chdir ($dir); } $contents = ftp_nlist($conn_id, "."); foreach ($contents as $file) { if ($file == '.' || $file == '..') continue; if (@ftp_chdir($conn_id, $file)) { ftp_chdir ($conn_id, ".."); ftp_sync ($file); } else ftp_get($conn_id, $file, $file, FTP_BINARY); } ftp_chdir ($conn_id, ".."); chdir (".."); ftp_close($conn_id);

    Read the article

  • How can I find the current edit slide in PowerPoint?

    - by mj2008
    I'm writing an add-in for PowerPoint, and want to get the text from the current slide in the editing window. The following works, but only when the slide is selected in the slide selector pane. xSelection := PowerPointApp.ActiveWindow.Selection; if xSelection.Type = ppSelectionSlides then begin xSlide := xSelection.SlideRange.Item(1); end; I've been chasing my tail at MSDN trying to work out what the correct way to find out the current slide is. The DocumentWindow doesn't seem to have a current slide.
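
    A hedged pointer rather than a confirmed answer: the DocumentWindow's View object is what tracks the slide being edited, so something along these lines (same style as the snippet above) may be what's needed:

        xSlide := PowerPointApp.ActiveWindow.View.Slide;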

    Read the article

  • When parsing XML with jQuery, how can we pass the current node from within each() to another function?

    - by Johusa
    Say we have an XML document with many book nodes... When parsing XML with jQuery, how can I pass the current node from the each() iteration to another function that will do some stuff until something is reached and then go back to the previous function (passing the current node from that function back to the first function)? Here is something more descriptive (this is just an example out of my head, not accurate): function MyParser(x1,x2,dom) { // if i am called by anotherFunction(thisNode) proceed from the passed node dom.find('book').each(function() { var Letter = thisNode.find(author).charAt(0); if(x1 == Letter) { // print everything till the next letter (x2) anotherFunction(thisNode) } } } function anotherFunction(x2,thisNode) { //continue parsing here until you reached x2 //when x2 is reached, return to previous function passing again the current node }
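
    A hedged sketch of one way to hand the node back and forth (the selectors and the assumption that the book elements are siblings come from the pseudo-code above, so treat the names as illustrative):

        function MyParser(x1, x2, dom) {
            dom.find('book').each(function () {
                var $node = $(this);                        // the current node of this iteration
                if ($node.find('author').text().charAt(0) === x1) {
                    anotherFunction(x2, $node);             // hand the current node over
                }
            });
        }

        function anotherFunction(x2, $node) {
            // keep walking from the passed node until a book starting with x2 is reached
            var $next = $node.next('book');
            while ($next.length && $next.find('author').text().charAt(0) !== x2) {
                // ... do some stuff with $next here ...
                $next = $next.next('book');
            }
            return $next;                                   // give the node back to the caller
        }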

    Read the article

  • The frame buffer layout of the current display cannot be made to match...

    - by adambox
    I get this error when I have a VM running in VMware Workstation on my work PC and try Remote Desktop-ing in from my iMac at home. I just upgraded VMware Workstation to 7.0 from 6.0, and now I'm getting it when I try to resume my VM at work. It then asks me the scary question of whether I want to preserve or discard the suspended state. I don't want to lose stuff! Ack! Update: I backed up my VM and tried hitting that "discard" button, and the result was a reboot of the VM. I then tried restoring to a snapshot, and none of my snapshots work! Is there any way to fix this so I can run 7 but still have my old snapshots? The frame buffer layout of the current display cannot be made to match the frame buffer layout stored in the snapshot. The dimensions of the frame buffer in the snapshot are: Max width 3200, Max height 1770, Max size 22659072. The dimensions of the frame buffer on the current display are: Max width 3200, Max height 1600, Max size 20512768. Error encountered while trying to restore the virtual machine state from file "C:\Documents and Settings\adam\My Documents\My Virtual Machines\dev\Windows XP Professional.vmss". What do I do so I don't break things horribly?
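
    A hedged workaround that is often suggested for this exact message (the option names are standard .vmx settings, but the values below are simply copied from the error text, so treat this as a sketch): with the VM powered off, pin the guest's frame buffer to the dimensions recorded in the snapshot by editing the .vmx file, then try restoring the snapshot again:

        svga.autodetect = "FALSE"
        svga.maxWidth = "3200"
        svga.maxHeight = "1770"
        svga.vramSize = "22659072"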

    Read the article

  • Is current SATA 6 gb/s equipment simply unreliable?

    - by korkman
    I have a 45-disk array of Seagate Barracuda 3 TB ST3000DM001 (yes these are desktop drives I'm aware of that) in a Supermicro sc847 JBOD, connected via LSI 9285. I have found a solution for the problem description below by reducing speed via MegaCli -PhySetLinkSpeed -phy0 2 -a0; for i in $(seq 48); do MegaCli -PhySetLinkSpeed -phy${i} 2 -a0; done and rebooting. The question remains: Is this typical for current 6 gb/s equipment? Is this the sad state of SATA storage? Or is some of my equipment (the sff-8088 cables come to mind) bad? The Problem was: Synchronizing HW RAID-6, disks kept offlining. Fetching SMART values reveiled that those which offlined did not increase powered-on hours anymore. That is, their firmware (CC4C) seems to crash. Digging into the matter by switching to Software RAID-6, with the disks passed-through, I got tons of kernel messages scattered across all disks, with 6 gb/s: sd 0:0:9:0: [sdb] Sense Key : No Sense [current] Info fld=0x0 sd 0:0:9:0: [sdb] Add. Sense: No additional sense information And finally, when a disk offlines: megasas: [ 5]waiting for 160 commands to complete ... megasas: [35]waiting for 159 commands to complete ... megasas: [155]waiting for 156 commands to complete ... megaraid_sas: pending commands remain after waiting, will reset adapter. Ugly controller reset here, then minutes later: megaraid_sas: Reset successful. sd 0:0:28:0: Device offlined - not ready after error recovery ... sd 0:0:28:0: [sdu] Unhandled error code sd 0:0:28:0: [sdu] Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK sd 0:0:28:0: [sdu] CDB: Read(10): 28 00 23 21 2f 40 00 00 70 00 sd 0:0:28:0: [sdu] killing request Reduced speed to 3 gb/s like written above, all problems vanished.

    Read the article

  • Outlook error - 'Can't open e-mail folders. You must connect to Exchange w/ current profile before you can sync folders w/ your Outlook data file'

    - by Emilio
    Note that the error message I put in the title of this question is abbreviated. The actual error message is below. I have an Exchange account and using Outlook 2010 as the client. I run in Cached Exchange Mode and have an .OST file locally. Recently I uninstalled and reinstalled office. I set up a new mail profile when prompted by Outlook 2010 upon first execution of the program. In my initial attempt, I pointed the data file at my existing OST file. In my second attempt, I had Outlook create a fresh empty file. In both cases I'm getting the error 'Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost).', also shown in this screenshot -- http://drop.io/4rc9v9o/asset/outlook-error-png. I don't know how to connect with the current profile - that's what I thought I did when I created a new .OST file? I've had this problem for several days so my OST file is now out of date. Once I get things running I obviously want my active mailbox to update the OST, not the other way around.

    Read the article

  • Can current backflow from a powered hub's adapter cause PC damage?

    - by SuperUserMan
    Keeping this short: can current flow from a powered USB hub's power adapter (sitting 10 meters away) back into the computer via the USB port and damage computer components like the motherboard? What should my concerns be? Using a 2 A 5 V power adapter to power a 10 m long active repeater USB extension cable with a 4-port hub, plugged into the PC's front port, causes the PC chassis fan to keep running (though slower than its regular speed), the front chassis HDD and power LEDs to turn on (though a bit dim), and maybe other things I can't detect/see at chip level on the motherboard. All this even after the PC is shut down (a bit scary). More detail (in case you still want to read): To run 4 high-power (450 mA) WiFi adapters far away from the PC, I bought an active repeater USB extension cable with 4 ports and a power port at the far end http://www.ebay.com/itm/33FT-USB-2-0-Male-to-Female-Extension-Cable-Hub-Splitter-Adapter-with-4-USB-Port-/390846115254 Then I added a locally bought 2 A 240 V AC to 5 V DC power adapter and plugged it into the USB hub, which is part of, and sits at the far end of, the 10 meter active repeater USB extension cable. All 4 WiFi adapters appear to run fine with this setup, but a running chassis fan and dimly lit power and HDD LEDs even when the PC is switched off is a bit scary, and surely means 5 V and some current is flowing all the way through that 10 meter extension cable into my USB port and powering things. Can this cause damage, and what should my concerns be? Of course I can't switch off the power adapter (10 meters away from the PC) every time I switch off my PC to prevent this.

    Read the article

  • Processing of Group Policy fails only on 2008 servers, with name resolution failure on the current domain controller

    - by Ken Wolfrom
    We spent the last 3 months doing an upgrade from a 2003 domain to a 2008R2 domain. Our last DC (of 5 total) was rebuilt and brought online. After it was put online, some 2008 and 2008R2 servers (10 now) started getting these errors in the event logs. ERRORS Description: The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following: a) Name Resolution failure on the current domain controller. b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller). We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive via \directory\shared on an affected server, they get this error: "There are currently no logon servers available to service the logon request." This is only affecting the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK and logins work: we are able to log in with domain\user at the consoles and via RDP. We can log onto an affected machine, can get to \domainname\sysvol, and can see the GPOs. We have checked the replication topology of the domain, and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, and then removed it from the domain and waited 24 hours; the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior. Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine - HTTP, TCP calls, SQL and Oracle DBs connect and process. Any input on possible reasons for this issue and fixes? It is only affecting the 2008 servers on a 2008R2 domain.

    Read the article

  • Use JAXB unmarshalling in Weblogic Server

    - by Leo
    Specifications: - Server: WebLogic 9.2, fixed by the customer. - Web services defined by wsdl and xsd files fixed by the customer; no modifications allowed. Hi, in this project we need to develop a mail system. It must share common work with the web service. We created a bean that receives an auto-generated class for a non-root xsd element (not wsdl); this bean does the common work. The mail system receives an xml with elements defined in the xsd file, and we need to copy that element info into the wsdlc-generated classes. With those objects we can use the common bean. It is not possible to redirect the mail request to the web service. We've been looking for a way to do this with WL 9.2 resources but haven't found anything. At the moment we've tried to use JAXB for this unmarshalling: JAXBContext c = JAXBContext.newInstance(new Class[]{WasteDCSType.class}); Unmarshaller u = c.createUnmarshaller(); WasteDCSType w = u.unmarshal(waste, WasteDCSType.class).getValue(); The waste variable is a DOM Element object. It isn't the root element because the root isn't included in the XSD. First we needed to add no-arg constructors to some auto-generated classes. No problem, we solved this, and finally we unmarshalled the xml without any exceptions. But we had problems with the attributes. The unmarshalling didn't set attributes; none of them in any class, not simple attributes, not large or short enumeration attributes. No problem with xml elements of any type. We can't create the unmarshaller from a "context string" (the package name) because no ObjectFactory has been created by wsdlc. If we set the schema, no element declarations are found and unmarshalling crashes. This is the build content: <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" /> <taskdef name="wsdlc" classname="weblogic.wsee.tools.anttasks.WsdlcTask"/> <target name="generate-from-wsdl"> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-environmentalMasterData.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-waste.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> </target> <target name="webservices" description=""> <jwsc srcdir="${src.dir}/webservices" destdir="${dest.dir}" classpathref="wspath"> <module contextPath="E3S" name="webservices"> <jws file="org/arc/eterws/impl/IE3SEnvironmentalMasterDataImpl.java" compiledWsdl="${src.dir}/webservices/e3s-environmentalMasterData_wsdl.jar"/> <jws file="org/arc/eterws/impl/Ie3SWasteImpl.java" compiledWsdl="${src.dir}/webservices/e3s-waste_wsdl.jar"/> <descriptor file="${src.dir}/webservices/META-INF/web.xml"/> </module> </jwsc> </target> My questions are: How does WebLogic unmarshal the xml with the JAX-RPC machinery, and can we do the same with an xsd element? If yes, how? If not, is there any reasonably simple solution to this problem? Or must we use XMLBeans, or regenerate the classes from the XSD with JAXB? What is the best solution? NOTE: there is not one single xsd; it is in fact a complex xsd structure.

    Read the article

  • Filling in gaps for outlines

    - by user146780
    I'm using an algorithm to generate quads. These become outlines. The algorithm is: void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input, std::vector<GLfloat> &output, int width) { output.clear(); if(input.size() < 2) { return; } int temp; float dirlen; float perplen; POINTFLOAT start; POINTFLOAT end; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; POINTFLOAT p0, p1, p2, p3; for(unsigned int i = 0; i < input.size() - 1; ++i) { start.x = static_cast<float>(input[i][0]); start.y = static_cast<float>(input[i][1]); end.x = static_cast<float>(input[i + 1][0]); end.y = static_cast<float>(input[i + 1][1]); dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = static_cast<float>(dir.x * 1.0 / dirlen); ndir.y = static_cast<float>(dir.y * 1.0 / dirlen); perp.x = dir.y; perp.y = -dir.x; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = static_cast<float>(perp.x * 1.0 / perplen); nperp.y = static_cast<float>(perp.y * 1.0 / perplen); perpoffset.x = static_cast<float>(nperp.x * width * 0.5); perpoffset.y = static_cast<float>(nperp.y * width * 0.5); diroffset.x = static_cast<float>(ndir.x * 0 * 0.5); diroffset.y = static_cast<float>(ndir.y * 0 * 0.5); // p0 = start + perpoffset - diroffset //p1 = start - perpoffset - diroffset //p2 = end + perpoffset + diroffset // p3 = end - perpoffset + diroffset p0.x = start.x + perpoffset.x - diroffset.x; p0.y = start.y + perpoffset.y - diroffset.y; p1.x = start.x - perpoffset.x - diroffset.x; p1.y = start.y - perpoffset.y - diroffset.y; p2.x = end.x + perpoffset.x + diroffset.x; p2.y = end.y + perpoffset.y + diroffset.y; p3.x = end.x - perpoffset.x + diroffset.x; p3.y = end.y - perpoffset.y + diroffset.y; output.push_back(p2.x); output.push_back(p2.y); output.push_back(p0.x); output.push_back(p0.y); output.push_back(p1.x); output.push_back(p1.y); output.push_back(p3.x); output.push_back(p3.y); } } The problem is that there are then gaps as seen here: http://img816.imageshack.us/img816/2882/eeekkk.png There must be a way to fix this. I see a pattern but I just cant figure it out. There must be a way to fill the missing inbetweens. Thanks

    Read the article

  • Edges on polygon outlines not always correct

    - by user146780
    I'm using the algorithm below to generate quads which are then rendered to make an outline like this http://img810.imageshack.us/img810/8530/uhohz.png The problem as seen on the image, is that sometimes the lines are too thin when they should always be the same width. My algorithm finds the 4 verticies for the first one then the top 2 verticies of the next ones are the bottom 2 of the previous. This creates connected lines, but it seems to not always work. How could I fix this? This is my algorithm: void OGLENGINEFUNCTIONS::GenerateLinePoly(const std::vector<std::vector<GLdouble>> &input, std::vector<GLfloat> &output, int width) { output.clear(); if(input.size() < 2) { return; } int temp; float dirlen; float perplen; POINTFLOAT start; POINTFLOAT end; POINTFLOAT dir; POINTFLOAT ndir; POINTFLOAT perp; POINTFLOAT nperp; POINTFLOAT perpoffset; POINTFLOAT diroffset; POINTFLOAT p0, p1, p2, p3; for(unsigned int i = 0; i < input.size() - 1; ++i) { start.x = static_cast<float>(input[i][0]); start.y = static_cast<float>(input[i][1]); end.x = static_cast<float>(input[i + 1][0]); end.y = static_cast<float>(input[i + 1][1]); dir.x = end.x - start.x; dir.y = end.y - start.y; dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y)); ndir.x = static_cast<float>(dir.x * 1.0 / dirlen); ndir.y = static_cast<float>(dir.y * 1.0 / dirlen); perp.x = dir.y; perp.y = -dir.x; perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y)); nperp.x = static_cast<float>(perp.x * 1.0 / perplen); nperp.y = static_cast<float>(perp.y * 1.0 / perplen); perpoffset.x = static_cast<float>(nperp.x * width * 0.5); perpoffset.y = static_cast<float>(nperp.y * width * 0.5); diroffset.x = static_cast<float>(ndir.x * 0 * 0.5); diroffset.y = static_cast<float>(ndir.y * 0 * 0.5); // p0 = start + perpoffset - diroffset //p1 = start - perpoffset - diroffset //p2 = end + perpoffset + diroffset // p3 = end - perpoffset + diroffset p0.x = start.x + perpoffset.x - diroffset.x; p0.y = start.y + perpoffset.y - diroffset.y; p1.x = start.x - perpoffset.x - diroffset.x; p1.y = start.y - perpoffset.y - diroffset.y; if(i > 0) { temp = (8 * (i - 1)); p2.x = output[temp + 2]; p2.y = output[temp + 3]; p3.x = output[temp + 4]; p3.y = output[temp + 5]; } else { p2.x = end.x + perpoffset.x + diroffset.x; p2.y = end.y + perpoffset.y + diroffset.y; p3.x = end.x - perpoffset.x + diroffset.x; p3.y = end.y - perpoffset.y + diroffset.y; } output.push_back(p2.x); output.push_back(p2.y); output.push_back(p0.x); output.push_back(p0.y); output.push_back(p1.x); output.push_back(p1.y); output.push_back(p3.x); output.push_back(p3.y); } } Thanks

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I’ve come to appreciate in control development in ASP.NET that use JavaScript is the ability to have more control over script and script include placement than ASP.NET provides natively. Specifically in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are generated into the HTML body and there’s very little operational control for placement of scripts. If you have multiple controls or several of the same control that need to place the same scripts onto the page it’s not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies either via resources or even if you are rendering referenced to CDN resources. Natively ASP.NET provides a host of methods that help embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax): RegisterClientScriptBlock Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes. RegisterStartupScript Renders a script block just prior to the </form> tag and should be used to for embedding code that should execute when the page is first loaded. Not recommended – use jQuery.ready() or equivalent load time routines. RegisterClientScriptInclude Embeds a reference to a script from a url into the page. RegisterClientScriptResource Embeds a reference to a Script from a resource file generating a long resource file string All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a ‘top’ and ‘bottom’ of the document location which gives you some flexibility over script placement and precedence. Script includes and resource url unfortunately do not even get that much control – references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the abililty to specify scripts in code and the ability to programmatically check what scripts have already been registered, but it doesn’t provide any more control over the script rendering process itself. Further the ScriptManager is a bear to deal with generically because generic code has to always check and see if it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or ScriptManager if it’s available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page which I think is rather important and a missing feature in the ASP.NET native functionality. 
Handling ScriptRenderModes One of the big enhancements that I’ve come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations: /// <summary> /// Determines how scripts are included into the page /// </summary> public enum ScriptRenderModes { /// <summary> /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode /// </summary> Inherit, /// Renders the script include at the location of the control /// </summary> Inline, /// <summary> /// Renders the script include into the bottom of the header of the page /// </summary> Header, /// <summary> /// Renders the script include into the top of the header of the page /// </summary> HeaderTop, /// <summary> /// Uses ClientScript or ScriptManager to embed the script include to /// provide standard ASP.NET style rendering in the HTML body. /// </summary> Script, /// <summary> /// Renders script at the bottom of the page before the last Page.Controls /// literal control. Note this may result in unexpected behavior /// if /body and /html are not the last thing in the markup page. /// </summary> BottomOfPage } This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? For me I often render scripts out of control resources and these scripts often include things like a JavaScript Library (jquery) and a few plug-ins. The order in which these can be loaded is critical so that jQuery.js always loads before any plug-in for example. Typically I end up with a general script layout like this: Core Libraries- HeaderTop Plug-ins: Header ScriptBlocks: Header or Script depending on other dependencies There’s also an option to render scripts and CSS at the very bottom of the page before the last Page control on the page which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager’s using static methods and control references to gain access to the page and embedding scripts. 
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
ControlResources holds a few standard constants and script resource links and the ScriptCompressionModule which is referenced in a few of the script inclusion methods. There’s also another useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed onto the page’s markup including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository West Wind Web Toolkit Home Page © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript

    Read the article

  • FormCollection does not see all my inputs

    - by sanfra1983
    Hi, I have a view where some text inputs are added dynamically using jQuery. Everything works, but when I add these inputs and then right-click in the browser to view the source, I don't see the added inputs. function addPerson () ( current + +; StrToAdd var = '<table id="compo" name="compo"' + current +'> <tr> <td> <label for="firstname"' + current +'"> Name </ label> <input id = 'firstname' + current + '"name =" Componenti.Nome_ "' +" "+ current +" "+ '" size = "29" /> </ td> <td> <label for = "lastname"' current + + '"> Name </ label> <input id="lastname''" name="Componenti.Cognome_"' + + + current + current'" size="29" /> </ td>' StrToAdd + = '<td> <label for="luogonascita"' + current +'"> LuogoNascita </ label> <input id = "luogodinascita' + current + '" name = "Componenti.Luogonascita_"' + "" + current + "" + '"size =" 29 "/> </ td>' StrToAdd + = '<td> <label for="datanascita"' + current +'"> DataNacita </ label> <input id = "dateOfBirth' + current + '" name = "Componenti.datanascita_"' + current + ' "size =" 29 "/> </ td> </ tr> </ table> ' StrToAdd + = '<script type="text/javascript"> jQuery (function ($) {$("# dateOfBirth' + current + '). mask ("' + mask +'")});</ script > '; $ ('# Components'). Append (StrToAdd); ) The problem is that when I POST the data to the action and go to create var valueProvider = formanagrafica.ToValueProvider();, in valueProvider I find all the inputs I added collapsed into a single entry, so when there is more than one it gives me the values separated by commas. How can I retrieve the values of one row of text inputs? I hope I explained it correctly.
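
    A hedged note: FormCollection stores posted values like a NameValueCollection, so several inputs sharing one name come back as a single comma-separated entry; GetValues returns them individually, and for default MVC list binding the inputs are usually named with an index (e.g. Componenti[0].Nome). The key below is just the pattern used in the script above:

        // one string per posted input instead of a comma-joined value
        string[] nomi = formanagrafica.GetValues("Componenti.Nome_1");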

    Read the article

  • Java: How to get the first day of the current week and month?

    - by cesarlinux
    Hello, I have the dates of several events expressed in milliseconds*, and I want to know which events fall inside the current week and the current month, but I can't figure out how to obtain the first day (day/month/year) of the running week and convert it to milliseconds, and the same for the first day of the month. *Since January 1, 1970, 00:00:00 GMT. Thanks in advance.
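
    A hedged sketch using java.util.Calendar (the first day of the week follows the default locale here; pass a Locale to getInstance if that matters):

        import java.util.Calendar;

        public class PeriodStart {
            public static void main(String[] args) {
                Calendar cal = Calendar.getInstance();
                cal.set(Calendar.HOUR_OF_DAY, 0);           // midnight today
                cal.set(Calendar.MINUTE, 0);
                cal.set(Calendar.SECOND, 0);
                cal.set(Calendar.MILLISECOND, 0);

                Calendar week = (Calendar) cal.clone();
                week.set(Calendar.DAY_OF_WEEK, week.getFirstDayOfWeek());   // first day of the current week
                long weekStartMillis = week.getTimeInMillis();

                Calendar month = (Calendar) cal.clone();
                month.set(Calendar.DAY_OF_MONTH, 1);                        // first day of the current month
                long monthStartMillis = month.getTimeInMillis();

                System.out.println(weekStartMillis + " " + monthStartMillis);
            }
        }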

    Read the article
