Search Results

Search found 11768 results on 471 pages for 'railstutorial org'.

Page 145/471

  • unable to load library at runtime in android application

    - by Addy
    Hi. I'm working on an Android application that uses JNI for native C code. I built the application against Android 2.0 with NDK r3 and it works fine. Now that I have changed to Android SDK 1.5 (API level 3), I get an error that the library libtest_demo.so cannot be opened: 05-13 16:54:23.603: INFO/dalvikvm(1211): Unable to dlopen(/data/data/org.abc.test_demo/lib/libtest_demo.so): Cannot find library I put the libtest_demo.so file in the same place, /data/data/org.abc.test_demo/lib/libtest_demo.so, but the problem remains. Please help.
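
    A minimal sketch of the usual loading pattern, in case it helps narrow things down; the wrapper class and native method below are hypothetical, and only the library name comes from the question. One common cause on SDK 1.5 is that the .so is not packaged inside the APK under libs/armeabi/, which is how the installer gets it into /data/data/<package>/lib/ with the right permissions; a file copied there by hand may not be picked up.

        // Hypothetical wrapper class; only the library name comes from the question.
        public class TestDemoNative {
            static {
                // Must match libtest_demo.so minus the "lib" prefix and the ".so" suffix.
                System.loadLibrary("test_demo");
            }

            // Hypothetical native method, implemented in the C code built with the NDK.
            public static native int nativeAdd(int a, int b);
        }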

    Read the article

  • Change spring bean properties at configuration time

    - by Nick Gerakines
    In a spring servlet xml file, I'm using org.springframework.scheduling.quartz.SchedulerFactoryBean to regularly fire a set of triggers. <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean"> <property name="triggers"> <list> <ref local="AwesomeTrigger" /> <ref local="GreatTrigger" /> <ref local="FantasticTrigger"/> </list> </property> </bean> The issue is that in different environments, I don't want certain triggers firing. Is there a way to include some sort of configuration or variable defined either in my build.properties for the environment or in a spring custom context properties file that assists the bean xml to determine which triggers should be included in the list? That way, for example, AwesomeTrigger would be called in development but not qa.
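
    One way to approach this, sketched below under the assumption of Quartz 1.x (org.quartz.Trigger) and Spring 3: a hypothetical FactoryBean that exposes only the triggers named in a per-environment property (for example enabled.triggers=AwesomeTrigger,GreatTrigger supplied from build.properties through a property placeholder). The SchedulerFactoryBean's triggers property would then reference this factory bean instead of the hard-coded list.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.Map;

        import org.quartz.Trigger;
        import org.springframework.beans.factory.FactoryBean;

        // Hypothetical helper: exposes only the triggers whose names appear in a
        // comma-separated property such as enabled.triggers=AwesomeTrigger,GreatTrigger.
        public class EnabledTriggersFactoryBean implements FactoryBean<List<Trigger>> {

            private Map<String, Trigger> candidateTriggers; // injected: name -> trigger bean
            private String enabledNames;                    // injected from build.properties

            public void setCandidateTriggers(Map<String, Trigger> candidateTriggers) {
                this.candidateTriggers = candidateTriggers;
            }

            public void setEnabledNames(String enabledNames) {
                this.enabledNames = enabledNames;
            }

            public List<Trigger> getObject() {
                List<String> enabled = Arrays.asList(enabledNames.split("\\s*,\\s*"));
                List<Trigger> result = new ArrayList<Trigger>();
                for (Map.Entry<String, Trigger> entry : candidateTriggers.entrySet()) {
                    if (enabled.contains(entry.getKey())) {
                        result.add(entry.getValue());
                    }
                }
                return result;
            }

            public Class<?> getObjectType() {
                return List.class;
            }

            public boolean isSingleton() {
                return true;
            }
        }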

    Read the article

  • Git to svn: Adding commit date to log messages

    - by Arnauld VM
    How can I have the author (or committer) name and date added to the log message when "dcommitting" to svn? For example, if the log message in Git is: This is a nice modif I'd like the message in svn to be something like: This is a nice modif ----- Author: John Doo <[email protected] 2010-06-10 12:38:22 Committer: Nice Guy <nguy@acme.org 2010-06-10 14:05:42 (Note that I'm mainly interested in the date, since I already mapped svn users in .svn-authors.) Is there a simple way? Is a hook needed? Any other suggestions? (See also: http://article.gmane.org/gmane.comp.version-control.git/148861) Thank you in advance. Yours faithfully, -- Arnauld Van Muysewinkel

    Read the article

  • Persistence unit is not persistent

    - by etam
    I need a persistence unit that creates an embedded database which stays persistent after the EntityManager is closed. This is my PU: <persistence-unit name="hello-jpa" transaction-type="RESOURCE_LOCAL"> <class>hello.jpa.User</class> <properties> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.format_sql" value="true"/> <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/> <property name="hibernate.connection.driver_class" value="org.hsqldb.jdbcDriver"/> <property name="hibernate.connection.username" value="sa"/> <property name="hibernate.connection.password" value=""/> <property name="hibernate.connection.url" value="jdbc:hsqldb:target/hsql.db"/> <property name="hibernate.hbm2ddl.auto" value="update"/> </properties> </persistence-unit> But the data is gone after the application is closed.
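
    A sketch of one way to rule out the two usual suspects with a file-based HSQLDB database: the database files living under the build's target/ directory (which a clean build wipes) and HSQLDB never getting a clean shutdown. The overrides map and the data/ path below are only examples; the persistence unit name is the one from the question.

        import java.util.HashMap;
        import java.util.Map;

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public class HelloJpaBootstrap {
            public static void main(String[] args) {
                Map<String, String> overrides = new HashMap<String, String>();
                // Keep the database outside target/ so a clean build does not delete it,
                // and ask HSQLDB to shut down (and flush) when the last connection closes.
                overrides.put("hibernate.connection.url",
                        "jdbc:hsqldb:file:data/hsql.db;shutdown=true");

                EntityManagerFactory emf =
                        Persistence.createEntityManagerFactory("hello-jpa", overrides);
                EntityManager em = emf.createEntityManager();
                try {
                    // ... persist entities inside a transaction here ...
                } finally {
                    em.close();
                    emf.close(); // closing the factory lets HSQLDB write its checkpoint
                }
            }
        }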

    Read the article

  • code snippet works when procedural, but doesn't when converted to modular

    - by Delirium tremens
    function sc_HTMLParser(aHTMLString){ var parseDOM = content.document.createElement('div'); parseDOM.appendChild(Components.classes['@mozilla.org/feed-unescapehtml;1'] .getService(Components.interfaces.nsIScriptableUnescapeHTML) .parseFragment(aHTMLString, false, null, parseDOM)); return parseDOM; } becomes this.HTMLParser = function(aHTMLString){ var parseDOM = content.document.createElement('div'); parseDOM.appendChild(Components.classes['@mozilla.org/feed-unescapehtml;1'] .getService(Components.interfaces.nsIScriptableUnescapeHTML) .parseFragment(aHTMLString, false, null, parseDOM)); return parseDOM; } and searchcontents = req.responseText; parsedHTML = sc_HTMLParser(searchcontents); sitefound = sc_sitefound(compareuris, parsedHTML); becomes searchcontents = req.responseText; alert(searchcontents); parsedHTML = this.HTMLParser(searchcontents); alert(parsedHTML); sitefound = this.sitefound(compareuris, parsedHTML); The modular code alerts the search contents, but doesn't alert the parsedHTML. Why? How to solve?

    Read the article

  • Is it possible to put myBatis (iBatis) xml mappers outside the project?

    - by kospiotr
    According to the user guide, I am able to use a file URL instead of a classpath resource: // Using classpath relative resources <mappers> <mapper resource="org/mybatis/builder/AuthorMapper.xml"/> </mappers> // Using url fully qualified paths <mappers> <mapper url="file:///var/sqlmaps/AuthorMapper.xml"/> </mappers> In my project I'm trying to put my mapper XML "outside" the project, and I'm doing this: <mapper url="file://D:/Mappers/ComponentMapper1.xml" /> The output in my log4j console: Error building SqlSession. The error may exist in file://D:/Mappers/ComponentMapper1.xml Cause: org.apache.ibatis.builder.BuilderException: Error parsing SQL Mapper Configuration. Cause: java.net.UnknownHostException: D Is this a bug, or am I doing something wrong?
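
    The UnknownHostException: D points at the URL form rather than at MyBatis itself: with only two slashes, the drive letter D: is parsed as a host name, which is exactly what the user guide's three-slash form (file:///var/...) avoids. A small check of that assumption with plain java.net.URL:

        import java.net.URL;

        public class FileUrlCheck {
            public static void main(String[] args) throws Exception {
                // Two slashes: "D:" is read as the authority, so "D" becomes a host name,
                // which later surfaces as java.net.UnknownHostException: D.
                URL twoSlashes = new URL("file://D:/Mappers/ComponentMapper1.xml");
                System.out.println("host = '" + twoSlashes.getHost() + "'");   // host = 'D'

                // Three slashes: empty authority, so the whole path is the file path.
                URL threeSlashes = new URL("file:///D:/Mappers/ComponentMapper1.xml");
                System.out.println("host = '" + threeSlashes.getHost() + "'"); // host = ''
            }
        }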

    Read the article

  • Jetty ant task hangs in build

    - by Kate Ansolis
    I have a problem when I run Jetty task with my war file. Here is my output: [jetty] Configuring Jetty for project: Guardian [jetty] 2010-08-23 18:53:09.062:INFO::Logging to STDERR via org.mortbay.log.StdErrLog [jetty] [jetty] Configuring Jetty for web application: project [jetty] Webapp source directory = C:\Projects\GUARDIAN\build\dist\project.war [jetty] Context path = / [jetty] Classpath = [] [jetty] Default scanned paths = [] [jetty] Extra scan targets = [] [jetty] Temp directory = C:\jettyTemp\ [jetty] 2010-08-23 18:53:09.391:INFO::jetty-6.1.25 [jetty] 2010-08-23 18:53:09.481:INFO::Extract C:\Projects\GUARDIAN\build\dist\project.war to C:\jettyTemp\webapp [jetty] 2010-08-23 18:53:13.810:INFO::NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet [jetty] 2010-08-23 18:53:13.909:INFO::No Transaction manager found - if your webapp requires one, please configure one. [jetty] 2010-08-23 18:53:18.038:INFO::Started [email protected]:8080 and it hangs forever. What can I do about it? The goal is to start jetty with this war file so I can continue testing.
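
    This may not be a failure at all: once the task logs Started ... :8080, Jetty is up and keeps serving the war in the foreground, so the build simply waits on it; the server has to be run in the background (or stopped by a later target) for the build to continue. As an alternative for testing, a minimal embedded-Jetty sketch for Jetty 6 (the org.mortbay classes from the log; the war path is the one from the question) that a test harness can start and stop itself:

        import org.mortbay.jetty.Server;
        import org.mortbay.jetty.webapp.WebAppContext;

        public class EmbeddedJettyForTests {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080); // same port as in the log

                WebAppContext webapp = new WebAppContext();
                webapp.setContextPath("/");
                webapp.setWar("C:/Projects/GUARDIAN/build/dist/project.war");
                server.setHandler(webapp);

                server.start(); // returns once the connector is listening
                // ... run the tests against http://localhost:8080/ here ...
                server.stop();  // shut Jetty down so the JVM (and the build) can finish
            }
        }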

    Read the article

  • NonUniqueObjectException during DAO integration test?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts, my DAO integration tests are failing with org.hibernate.NonUniqueObjectException. I do tend to re-use the same object (same ID, even) over and over for all the different tests, and I am sure that is the cause, but I can see in the logs that Spring Test and Atomikos are clearly rolling back the transaction associated with each test method. I would have thought the rollback would also have cleared the persistence context. On a hunch, I added a call to dao.clear() at the beginning of the faulty test methods and the problem went away! Rollback doesn't clear the persistence context...hmmm.... Not sure if this is relevant, but I see a possible autocommit setting problem in the log file: [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows Because I am using connection pooling, I figure that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> </props> </property> </bean>
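
    On the NonUniqueObjectException itself: it is thrown when a detached instance is re-attached while another instance with the same identifier is already in the persistence context, and a transaction rollback undoes the database work without necessarily leaving the test with a fresh context. A small illustrative sketch (generic entity type, not tied to the DAOs above) of the two usual remedies:

        import javax.persistence.EntityManager;

        public class PersistenceContextHygiene {

            // Remedy 1: merge copies the detached state onto the managed instance with
            // the same id (or loads one), so no duplicate instance is ever attached.
            public <T> T reattach(EntityManager em, T detachedEntity) {
                return em.merge(detachedEntity);
            }

            // Remedy 2: start each test from an empty persistence context, which is
            // effectively what the explicit dao.clear() call achieved.
            public void resetContext(EntityManager em) {
                em.clear();
            }
        }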

    Read the article

  • how to get child nodes in xsl

    - by ppp
    here my code- <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="ArrayOfLinkEntity" name="bindLink"> <ul> <xsl:for-each select="LinkEntity[ParentLinkId=0]"> <li> <xsl:variable name="linkId" select="LinkId"/> <xsl:variable name="child" select="count(/ArrayOfLinkEntity/LinkEntity[ParentLinkId=$linkId])"/> <xsl:value-of select="$child"/> <xsl:choose> <xsl:when test="($child &gt; 0)"> <a href="#" data-flexmenu="flexmenu1" onclick="javascript:setPageLinkId({$linkId});"> <xsl:value-of select="LinkTitle"/> <img src="../images/down.gif" border="0"/> </a> </xsl:when> <xsl:otherwise > <a href="#" onclick="javascript:setPageLinkId({$linkId});"> <xsl:value-of select="LinkTitle"/> </a> </xsl:otherwise> </xsl:choose> </li> </xsl:for-each> </ul> </xsl:template> </xsl:stylesheet> but I am getting $child=0 always.but there exists children. my xml structure- <ArrayOfLinkEntity xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <LinkEntity> <EntityId>00000000-0000-0000-0000-000000000000</EntityId> <LinkId>1</LinkId> <SequenceNo>1</SequenceNo> <ParentLinkId>0</ParentLinkId> <LinkTitle>Home</LinkTitle> <SubLink /> </LinkEntity> ... </ArrayOfLinkEntity> What should I do? Please suggest.

    Read the article

  • [SWT/RCP] Alpha blending is slow on Linux

    - by elgcom
    We are developing an SWT/RCP (Eclipse 3.5) application on both Windows and Linux (on identical hardware). The application is a GIS app which shows several layered maps (PNG images) rendered with alpha blending. org.eclipse.draw2d.Graphics.setAlpha(...); org.eclipse.draw2d.Graphics.drawImage(...); On Windows the performance is pretty good, but on Linux it is very poor. Is that a Linux (GTK/KDE) problem, or is there any workaround to improve the performance on Linux?
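
    One workaround worth trying, sketched below with plain SWT (an assumption about the drawing setup, not a guaranteed fix): composite the layered PNGs once into an off-screen Image and have the paint code draw only that cached image, so the alpha blending cost is paid when the layers change rather than on every repaint. That tends to matter most where advanced graphics operations fall back to slower paths, as they do on some GTK/X11 configurations.

        import org.eclipse.swt.graphics.GC;
        import org.eclipse.swt.graphics.Image;
        import org.eclipse.swt.widgets.Display;

        public class MapCompositor {

            /** Blends the map layers once; callers then draw only the returned image. */
            public static Image composeLayers(Display display, Image[] layers, int[] alphas) {
                int width = layers[0].getBounds().width;
                int height = layers[0].getBounds().height;
                Image composite = new Image(display, width, height);

                GC gc = new GC(composite);
                try {
                    for (int i = 0; i < layers.length; i++) {
                        gc.setAlpha(alphas[i]);       // blending cost paid once, not per repaint
                        gc.drawImage(layers[i], 0, 0);
                    }
                } finally {
                    gc.dispose();
                }
                return composite; // dispose() it when the layers or alphas change
            }
        }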

    Read the article

  • Regular expression only for website

    - by Katie
    Hi, I'm new to regular expressions. I need to find just websites in some text, and I'm looking for a regular expression able to find strings like: www.my.home, http://my.site.it But this regular expression should not find strings like: [email protected] or a website that is already inside an HTML tag: <a href="http://www.my.site.com/"><span style="font-style: normal;">www.mambo-test.org</span></a> I tried this one: \b((https?://[^ ])|(www.[^ ])) but it also finds the website in the href and between the tags: <a href="http://www.my.site.com/"><span style="font-style: normal;">www.mambo-test.org</span></a> and I don't know how to exclude this case.
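
    A heuristic sketch in Java (an assumed environment; the idea carries over to most regex flavours): a one-character negative lookbehind rejects candidates preceded by characters typical of an email address or of an HTML attribute/tag context, which covers the href="..." and >www... cases from the question without needing a full HTML parser.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class UrlFinder {
            // Heuristic only: reject matches preceded by word characters, '@' (emails),
            // or characters that suggest an HTML attribute/tag context (" ' = > /).
            private static final Pattern URL = Pattern.compile(
                    "(?<![\\w@.>=\"'/-])(?:https?://|www\\.)[^\\s\"<>,]+");

            public static void main(String[] args) {
                String text = "Visit www.my.home or http://my.site.it, mail me at me@my.site.com, "
                        + "<a href=\"http://www.my.site.com/\"><span>www.mambo-test.org</span></a>";
                Matcher m = URL.matcher(text);
                while (m.find()) {
                    System.out.println(m.group()); // prints www.my.home and http://my.site.it
                }
            }
        }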

    Read the article

  • Why is my JavaScript Twitter feed not working in Internet Explorer?

    - by JAG2007
    We're rolling out a redesign of helpcurenow.org, and we've implemented a Twitter feed in the footer. (I'm the design and front-end guy; my coworker is the scripting and back-end guy.) All is well with the Twitter feed in all major browsers except Internet Explorer, version 8 and later. However, we have no clue why IE is not pulling the feed at all. Any hints? http://betawww.helpcurenow.org/ (look in the footer)

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)

    Read the article

  • Python: Converting a tuple to a string with 'err'

    - by skylarking
    Given this : import os import subprocess def check_server(): cl = subprocess.Popen(["nmap","10.7.1.71"], stdout=subprocess.PIPE) result = cl.communicate() print result check_server() check_server() returns this tuple: ('\nStarting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:26 EDT\nInteresting ports on 10.7.1.71:\nNot shown: 1711 closed ports\nPORT STATE SERVICE\n21/tcp open ftp\n22/tcp open ssh\n80/tcp open http\n\nNmap done: 1 IP address (1 host up) scanned in 0.293 seconds\n', None) Changing the second line in the method to result, err = cl.communicate() results in check_server() returning : Starting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:27 EDT Interesting ports on 10.7.1.71: Not shown: 1711 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.319 seconds Looks to be the case that the tuple is converted to a string, and the \n's are being stripped.... but how? What is 'err' and what exactly is it doing?

    Read the article

  • Socket error in python

    - by Alice Everett
    I am using python-monetdb 11.16.0.7. I created my database farm and database according to instructions given below (source: http://www.monetdb.org/Documentation/monetdbd) % monetdbd start /home/my-dbfarm % monetdb create my-first-db Then I tried to connect to the database using the below mentioned command in python(https://pypi.python.org/pypi/python-monetdb/). Upon doing so I am getting the below mentioned error: >import monetdb.sql >connection=monetdb.sql.connect(username="monetdb",password="monetdb",hostname="localhost",database="my-first-db"); File "/usr/local/lib/python2.7/dist-packages/monetdb/sql/__init__.py", line 28, in connect return Connection(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/monetdb/sql/connections.py", line 58, in __init__ unix_socket=unix_socket) File "/usr/local/lib/python2.7/dist-packages/monetdb/mapi.py", line 93, in connect self.socket.connect((hostname, port)) File "/usr/lib/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args) socket.error: [Errno 111] Connection refused Can someone please help me with this?

    Read the article

  • WordPress front page (homepage) fails to redirect when static front page is set.

    - by Keyslinger
    I have configured WordPress to display a static front page as described here: http://codex.wordpress.org/Settings_Reading_SubPanel#Reading_Settings When I save the changes and try to visit my front page, my browser displays the following error: "The page isn't redirecting properly. Firefox has detected that the server is redirecting the request for this address in a way that will never complete." Disabling cookies does not remedy the situation. I'm using the Constructor theme (http://wordpress.org/extend/themes/constructor), which I suspect may be contributing to the problem. How can I make WordPress properly display my front page?

    Read the article

  • DailyRollingFileHandler: files should be rotated on a daily basis

    - by nag
    We have a requirement for a Handler, extended from Java logging, that rotates files on a daily basis. Java util logging currently supports rotation based on file size via FileHandler; it doesn't support rotation on a daily basis. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6350749 The handler mentioned here doesn't seem promising either: http://www.x4juli.org/api/org/x4juli/handlers/RollingFileHandler.html So what we are looking for is an appender that allows daily rotation. We would like to write such a handler; which is the appropriate handler to extend, StreamHandler or FileHandler? The other question is: is there a way to configure two different files for a single handler, say FileHandler? For example, we would like certain kinds of messages to be captured in one file and other messages in another file. We would appreciate any comments.
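
    On the first question, a third option is to extend Handler directly and delegate to a FileHandler that is replaced when the day changes; a sketch of that idea follows (the date-stamped file-name pattern is just an example). On the second question, a single FileHandler writes to one target, so the usual approach is to register two handlers and give each a Filter that selects the messages it should capture.

        import java.io.IOException;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.logging.ErrorManager;
        import java.util.logging.FileHandler;
        import java.util.logging.Handler;
        import java.util.logging.LogRecord;
        import java.util.logging.SimpleFormatter;

        // Sketch: wraps a java.util.logging.FileHandler and swaps it for a new one,
        // with a date-stamped file name, the first time a record arrives on a new day.
        public class DailyRollingFileHandler extends Handler {

            private final String namePattern; // e.g. "app-%s.log" (hypothetical)
            private final SimpleDateFormat dayFormat = new SimpleDateFormat("yyyy-MM-dd");
            private String currentDay;
            private FileHandler delegate;

            public DailyRollingFileHandler(String namePattern) throws IOException {
                this.namePattern = namePattern;
                roll();
            }

            private void roll() throws IOException {
                currentDay = dayFormat.format(new Date());
                if (delegate != null) {
                    delegate.close();
                }
                delegate = new FileHandler(String.format(namePattern, currentDay), true);
                delegate.setFormatter(new SimpleFormatter());
            }

            @Override
            public synchronized void publish(LogRecord record) {
                if (!isLoggable(record)) {
                    return;
                }
                String day = dayFormat.format(new Date(record.getMillis()));
                if (!day.equals(currentDay)) {
                    try {
                        roll();
                    } catch (IOException e) {
                        reportError("Could not roll log file", e, ErrorManager.OPEN_FAILURE);
                    }
                }
                delegate.publish(record);
            }

            @Override
            public synchronized void flush() {
                delegate.flush();
            }

            @Override
            public synchronized void close() {
                delegate.close();
            }
        }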

    Read the article

  • search map based on key wildcard

    - by Hugo Koopmans
    I have the following map of maps: <map:map xmlns:map="http://marklogic.com/xdmp/map" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <map:entry key="101201"> <map:value> "content" </map:value> </map:entry> ... more maps ... Now I would like to search/filter the map based on the key using a wildcard. In fact, I want to filter based on the first 4 characters of the key="101201", so key="1012**". Question: give me all maps that have a key matching '1012*'. Can that be done efficiently? Thanks for your time, Hugo

    Read the article

  • How to disable JSR-303 Hibernate Validation in Spring3

    - by Pinchy
    After putting hibernate-validator.jar and javax.validation-api.jar on my classpath, the org.springframework.dao.DataIntegrityViolationException is replaced by org.hibernate.exception.ConstraintViolationException, and this is causing a lot of issues. I have to include these two jars to be able to upgrade Jersey to 2.4, which depends on them. Putting the following properties into the hibernate.properties file doesn't help; Hibernate simply ignores them, although it does load the file on start-up (loaded properties from resource hibernate.properties: {hibernate.validator.apply_to_ddl=false, hibernate.validator.autoregister_listeners=false, etc.}): javax.persistence.validation.mode=none hibernate.validator.autoregister_listeners=false hibernate.validator.apply_to_ddl=false I am using Spring 3.2.4 with a SessionFactory and mapping resources from hbm.xml files with constraints in them, Hibernate 3.6.9.Final, hibernate-validator 5.0.Final, and javax.validator-api 1.1.0.Final. I just can't figure out how to disable Hibernate validation; any help will be much appreciated.
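
    Since the log shows hibernate.properties being read yet the setting having no visible effect, one thing worth trying (a sketch, assuming the SessionFactory can also be built programmatically for a quick test) is to set the switch directly on the Configuration that builds the SessionFactory, which is where Hibernate decides whether to register the Bean Validation event listeners:

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        public class ValidationOffSessionFactory {
            public static SessionFactory build() {
                // Reads hibernate.cfg.xml (and the hbm.xml mappings it lists) from the classpath.
                Configuration cfg = new Configuration().configure();

                // Ask Hibernate not to wire up the JSR-303 event listeners at all.
                cfg.setProperty("javax.persistence.validation.mode", "none");

                return cfg.buildSessionFactory();
            }
        }

    With Spring's LocalSessionFactoryBean, the equivalent is adding the same key to its hibernateProperties, which guarantees the value reaches the Configuration instead of relying on a classpath properties file.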

    Read the article

  • Can I improve this regex check for valid domain names?

    - by Josh
    So, I have been working on this domain name regular expression. So far, it seems to pick up domain names with SLDs and TLDs (with the optional ccTLD), but there is duplication of the TLD listing. Can this be refactored any further? params[:domain_name].downcase.strip.match(/^[a-z0-9\-]{2,63} \.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)| (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]| (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)| (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]| (m[acdghklmnopqrstuvwxyz]|me|mil|mobi|museum)|(n[acefgilopruz]|name|net)|(om|org)| (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]| (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]) (\.((a[cdefgilmnoqrstuwxz]|aero|arpa)|(b[abdefghijmnorstvwyz]|biz)| (c[acdfghiklmnorsuvxyz]|cat|com|coop)|d[ejkmoz]|(e[ceghrstu]|edu)|f[ijkmor]| (g[abdefghilmnpqrstuwy]|gov)|h[kmnrtu]|(i[delmnoqrst]|info|int)| (j[emop]|jobs)|k[eghimnprwyz]|l[abcikrstuvy]| m[acdghklmnopqrstuvwxyz]|mil|mobi|museum)| (n[acefgilopruz]|name|net)|(om|org)| (p[aefghklmnrstwy]|pro)|qa|r[eouw]|s[abcdeghijklmnortvyz]| (t[cdfghjklmnoprtvwz]|travel)|u[agkmsyz]|v[aceginu]|w[fs]|y[etu]|z[amw]))?$/)

    Read the article

  • PERL XPath Parser Help

    - by cognvision
    I want to pull in data using a XML::XPath parser from a XML DB file from the Worldbank site. The problem is that I'm not seeing any results in the output. I must be missing something in the code. Ideally, I would like to extract just the death rate statistics from each country XML DB (year and value). I'm using this as part of my input: http://data.worldbank.org/sites/default/files/countries/en/afghanistan_en.xml use strict; use LWP 5.64; use HTML::ContentExtractor; use XML::XPath; my $agent1 = LWP::UserAgent->new; my $extractor = HTML::ContentExtractor->new(); #Retrieve main Worldbank country site my $mainlink = "http://data.worldbank.org/country/"; my $page = $agent1->get("$mainlink"); my $fulltext = $page->decoded_content(); #Match to just all available countries in Worldbank my $country = ""; my @countryList; if (@countryList = $fulltext =~ m/(http:\/\/data\.worldbank\.org\/country\/.*?")/gi){ foreach $country(@countryList){ #Remove " at the end of link $country=~s/\"//gi; print "\n" . $country; #Retrieve each country profile's XML DB file my $page = $agent1->get("$country"); my $fulltext = $page->decoded_content(); my $XML_DB = ""; my @countryXMLDBList; if (@countryXMLDBList = $fulltext =~ m/(http:\/\/data\.worldbank\.org\/sites\/default\/files\/countries\/en\/.*?\.xml)/gi){ foreach $XML_DB(@countryXMLDBList){ my $page = $agent1->get("$XML_DB"); my $fulltext = $page->decoded_content(); #print $fulltext; #Use XML XPath parser to find elements related to death rate my $xp = XML::XPath->new($fulltext); #my $xp = XML::XPath->new("afghanistan_en.xml"); my $nodeSet = $xp->find("//*"); if (!$nodeSet->isa('XML::XPath::NodeSet') || $nodeSet->size() == 0) { #No match found print "\nMatch not found!"; exit; } else { foreach my $node ($nodeSet->get_nodelist){ print "\n" . $node->find('country')->string_value; print "\n" . $node->find('indicator')->string_value; print "\n" . $node->find('year')->string_value; print "\n" . $node->find('value')->string_value; exit; } } } #Build line graph based on death rate statistics and output some image file format } } } I am also looking into using the xpath expression "following-sibling", but not sure how to use it correctly. For example, I have the following set of XML data where I am only interested in pulling siblings directly after the indicator for just death rate data. <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2006</year> <value>20.3410000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2007</year> <value>19.9480000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="SP.DYN.CDRT.IN">Death rate, crude (per 1,000 people)</indicator> <year>2008</year> <value>19.5720000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2005</year> <value>7.0000000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2006</year> <value>12.0000000</value> </data> - <data> <country id="AFG">Afghanistan</country> <indicator id="IC.EXP.DOCS">Documents to export (number)</indicator> <year>2007</year> <value>12.0000000</value> </data> Any help would be much appreciated!!!

    Read the article

  • How to configure rime style in ICEfaces 3

    - by fresh_dev
    In ICEfaces 2 I was configuring the rime style as follows: <h:head> <link href="./xmlhttp/css/xp/xp.css" rel="stylesheet" type="text/css"/> </h:head> <h:body styleClass="ice-skin-rime"> </h:body> <h:outputStylesheet library="org.icefaces.component.skins" name="rime.css" /> I was wondering how to configure it in ICEfaces 3, because I tried the following and it doesn't work: <context-param> <param-name>org.icefaces.ace.theme</param-name> <param-value>rime</param-value> </context-param> Please advise. Thanks.

    Read the article

  • How can I launch a system command via Javascript in Google Chrome?

    - by kvsn
    I want to execute a local program on my computer via Javascript in Chrome. In Firefox, it can be done as follows (after setting 'signed.applets.codebase_principal_support' to true in about:config): function run_cmd(cmd, args) { netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect"); var file = Components.classes["@mozilla.org/file/local;1"] .createInstance(Components.interfaces.nsILocalFile); file.initWithPath(cmd); var process = Components.classes["@mozilla.org/process/util;1"] .createInstance(Components.interfaces.nsIProcess); process.init(file); process.run(false, args, args.length); } What's the equivalent code for Chrome?

    Read the article

  • NoClassDefFound error - Spring JDBC

    - by glowcoder
    Right now, I'm compiling my .class files in Eclipse and moving them over to my %tomcat_home%\webapps\myapp\WEB-INF\classes directory. They compile just fine. I also have, in the ...\classes directory, an org.springframework.jdbc-3.0.2.RELEASE.jar which I have verified has the org.springframework.jdbc.datasource.DriverManagerDataSource class inside it. However, I get a NoClassDefFound error when I run my class and it tries to DriverManagerDataSource source = new DriverManagerDataSource(); I don't understand why it wouldn't be finding that jar. Any help is appreciated!

    Read the article
