Search Results

Search found 16602 results on 665 pages for 'directory'.

Page 637 of 665

  • java.util.zip - ZipInputStream vs. ZipFile

    - by lucho
    Hello, community! I have some general questions regarding the java.util.zip library. What we basically do is import and export many small components. Previously these components were imported and exported using a single big file, e.g.:

        <component-type-a id="1"/>
        <component-type-a id="2"/>
        <component-type-a id="N"/>
        <component-type-b id="1"/>
        <component-type-b id="2"/>
        <component-type-b id="N"/>

    Please note that the order of the components during import is relevant. Now every component should occupy its own file, which should be externally versioned, QA-ed, and so on. We decided that the output of our export should be a zip file (containing all of these files) and that the input of our import should be a similar zip file. We do not want to explode the zip in our system, nor do we want to open a separate stream for each of the small files. My current questions:

    Q1. Does ZipInputStream guarantee that the zip entries (the little files) will be read in the same order in which they were inserted by our export, which uses ZipOutputStream? I assume reading is something like:

        ZipInputStream zis = new ZipInputStream(new BufferedInputStream(fis));
        ZipEntry entry;
        while ((entry = zis.getNextEntry()) != null) {
            // read from zis until available
        }

    I know that the central zip directory is put at the end of the zip file, but the file entries inside nevertheless appear in sequential order. I also know that relying on the order is an ugly idea, but I just want to have all the facts in mind.

    Q2. If I use ZipFile (which I prefer), what is the performance impact of calling getInputStream() hundreds of times? Will it be much slower than the ZipInputStream solution? The zip is opened only once, and ZipFile is backed by a RandomAccessFile - is this correct? I assume reading is something like:

        ZipFile zipfile = new ZipFile(argv[0]);
        Enumeration e = zipfile.entries(); // TODO: assure the order of the entries
        while (e.hasMoreElements()) {
            entry = (ZipEntry) e.nextElement();
            is = zipfile.getInputStream(entry);
        }

    Q3. Are the input streams retrieved from the same ZipFile thread safe (e.g. may I read different entries in different threads simultaneously)? Are there any performance penalties? Thanks for your answers!
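
    A minimal sketch of both read paths, assuming Java 7+ and an archive written by a single ZipOutputStream; the file name components.zip is an illustration only. In practice ZipInputStream yields entries in the order the local headers were written, while ZipFile.entries() follows the central directory, but it is worth verifying the ordering against archives produced by your own export rather than relying on it; the same caution applies to sharing one ZipFile across threads.

        import java.io.*;
        import java.util.Enumeration;
        import java.util.zip.*;

        public class ZipReadDemo {
            public static void main(String[] args) throws IOException {
                File archive = new File("components.zip"); // hypothetical file name

                // Sequential read: ZipInputStream walks the local file headers in
                // the order they were written by ZipOutputStream.
                try (ZipInputStream zis = new ZipInputStream(
                        new BufferedInputStream(new FileInputStream(archive)))) {
                    ZipEntry entry;
                    byte[] buf = new byte[8192];
                    while ((entry = zis.getNextEntry()) != null) {
                        long total = 0;
                        int n;
                        while ((n = zis.read(buf)) > 0) total += n;
                        System.out.println(entry.getName() + ": " + total + " bytes");
                    }
                }

                // Random access: ZipFile reads the central directory once, so
                // per-entry streams can be opened in any order without rescanning.
                try (ZipFile zf = new ZipFile(archive)) {
                    Enumeration<? extends ZipEntry> entries = zf.entries();
                    while (entries.hasMoreElements()) {
                        ZipEntry entry = entries.nextElement();
                        try (InputStream in = zf.getInputStream(entry)) {
                            long total = 0;
                            int n;
                            byte[] buf = new byte[8192];
                            while ((n = in.read(buf)) > 0) total += n;
                            System.out.println(entry.getName() + ": " + total + " bytes");
                        }
                    }
                }
            }
        }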

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello All - I have a task to import/transform and extract zipped binary files that contain both text data and embedded binary data. Within the data is content that is relational in nature and needs to be processed into a defined database structure. Currently I have a single-threaded C# app that essentially grabs all the files from the directory (currently there are about 13K files of varying sizes) and extracts the data on a single thread, inserting into the database line by line. As you can imagine, this is a very slow process and it is unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. A follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list:

    How do I iterate through a packet of data using SSIS? In the app the file is decompressed and then parsed using streams and byte arrays, and each packet is routed to the required parsing routine based on its header data. There is bit swapping involved as well. Should I wrap the app code up into one or more script tasks and let them do the custom processing?

    The data is separated by year, and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well, and most likely process it by hand.

    Should I simply load the zipped file into SQL Server as a blob and parse it with T-SQL? Would that be multi-threaded if done that way? I'm not sure how to do the parsing involved here in T-SQL. Which do you think would be faster?

    Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up?

    Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server; getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick

    Read the article

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just select parties (not sure yet). Each developer must agree to a license, and they receive a developer key for their person. Each software application will have its own key as well. So then end-users will download the software which will interact with my service's API. Each user will have a key for each application as well (probably using OAuth). Content will be cached on first download and accessible offline via just the third-party application that cached the content. If a user cancels their subscription, I plan on doing the following: Deactivate the user's OAuth key for all applications. Do not allow the user's account to download new content via the API (and subsequently any software that uses the API). Now, the big question is: how do I make content expire if they cancel their subscription? If they cancel, they should not have access to content anymore. Here are ideas I've thought of (some of these are half-solutions, not yet fully fleshed out): Require that applications encrypt downloaded content using the user's OAuth key, making it available to only the application. This will prevent most users from going to the cache directory and just copying and keeping files. Update the user's key once a month, forcing content to re-cache on a monthly basic. Users could then access content for a month after they cancel their subscription. Require applications to "phone home" [to the service] periodically and check whether the user's subscription has terminated. If so, require in the API developer license that applications expire cache. If it is found that applications do not comply, their keys (and possibly keys for all developers) are permanently deactivated as a consequence. One major worry is that some applications may blatantly ignore constraints of the license. Is it generally acceptable to rely on applications abiding by the licensing constraints? Bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.

    Read the article

  • Getting Started with Maven + Jaxb project + IntellijIdea

    - by Em Ae
    I am completely new to IntelliJ IDEA and I am looking for a step-by-step process to set up a basic project. My project depends on Maven + JAXB classes, so I need a Maven project so that when I compile it, the JAXB objects are generated by the Maven plugins. I started like this:

    1. I created a blank project, say MaJa
    2. Added a Maven module to it
    3. Added the following settings in pom.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <groupId>MaJa</groupId>
          <artifactId>MaJa</artifactId>
          <version>1.0</version>
          <dependencies>
            <dependency>
              <groupId>javax.xml.bind</groupId>
              <artifactId>jaxb-api</artifactId>
            </dependency>
            <dependency>
              <groupId>com.sun.xml.bind</groupId>
              <artifactId>jaxb-impl</artifactId>
              <version>2.1</version>
            </dependency>
          </dependencies>
          <build>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>jaxb2-maven-plugin</artifactId>
                <executions>
                  <execution>
                    <goals>
                      <goal>xjc</goal>
                    </goals>
                  </execution>
                </executions>
                <configuration>
                  <schemaDirectory>${basedir}/src/main/resource/api/MaJa</schemaDirectory>
                  <packageName>com.rimt.shopping.api.web.ws.v1.model</packageName>
                  <outputDirectory>${build.directory}</outputDirectory>
                </configuration>
              </plugin>
            </plugins>
          </build>
        </project>

    First of all, are these the right settings? I tried clicking Make/Compile 'MaJa' from the project's right-click menu and it didn't do anything. I will be looking forward to your replies.

    Read the article

  • Tomcat JMX connection - Authentication failed.

    - by ziggy
    I am having some problems setting up Tomcat for JMX. I added the following properties to CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun .management.jmxremote.ssl=false" And have added the jmxremote.password file in to the conf directory. I wrote a client tool that connects to the JMX server running on port 18070. When i run the client program i get the following error. Exception in thread "main" java.lang.SecurityException: Authentication failed! Credentials required at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticationFailure(JMXPluggableAuthenticator.java:193) at com.sun.jmx.remote.security.JMXPluggableAuthenticator.authenticate(JMXPluggableAuthenticator.java:145) at sun.management.jmxremote.ConnectorBootstrap$AccessFileCheckerAuthenticator.authenticate(ConnectorBootstrap.java:185) at javax.management.remote.rmi.RMIServerImpl.doNewClient(RMIServerImpl.java:213) at javax.management.remote.rmi.RMIServerImpl.newClient(RMIServerImpl.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305) at sun.rmi.transport.Transport$1.run(Transport.java:159) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:155) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907) at java.lang.Thread.run(Thread.java:619) at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:255) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:233) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:142) at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2312) at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:277) at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248) at com.bt.c21sc.c21tkprobe.accessors.C21TkProbeJmxDAO.connect(Unknown Source) at com.bt.c21sc.c21tkprobe.service.C21TkProbeBD.execute(Unknown Source) at com.bt.c21sc.c21tkprobe.C21AppserverProbe.main(Unknown Source) If i change the CATALINA_OPTS properties to CATALINA_OPTS="-Dcom.sun.management.jmxremote.port=18070 -Dcom.sun.management.jmxremote.password.file=$CATALINA_BASE/conf/jmxremote.password -Dcom.sun .management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false" Then it works fine. I think what i am confused of is what is classed as remote access. I am running the client program away from the Tomcat instance but both Tomcat and the client tool are on the same machine (i.e. different virtual machines but same environemnt). I thought i had to configure the remote authentication if i access the JMX server remotely from a different machine. 
By remote access do they mean accessing the JMX server from any VM either locally or remotely?
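
    As far as I can tell, "remote" here simply means any client coming in through the RMI connector that com.sun.management.jmxremote.port opens - authentication applies whether that client runs on the same machine or not, and the "Credentials required" failure usually just means the client never supplied a username and password. A minimal client sketch; the host, port, role name and password below are placeholders and must match an entry in conf/jmxremote.password (and the corresponding role in jmxremote.access):

        import java.util.HashMap;
        import java.util.Map;
        import javax.management.MBeanServerConnection;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class JmxClientDemo {
            public static void main(String[] args) throws Exception {
                // Default out-of-the-box agent URL for the port set via
                // com.sun.management.jmxremote.port (placeholder host/port).
                JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:18070/jmxrmi");

                // Placeholder credentials - must exist in jmxremote.password.
                Map<String, Object> env = new HashMap<String, Object>();
                env.put(JMXConnector.CREDENTIALS, new String[] { "controlRole", "secret" });

                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                try {
                    MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                    System.out.println("MBean count: " + mbsc.getMBeanCount());
                } finally {
                    connector.close();
                }
            }
        }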

    Read the article

  • problem in adding image to JFrame

    - by firestruq
    Hi, I'm having problems adding a picture to a JFrame; something is probably missing or written incorrectly. Here are the classes. The main class:

        public class Tester {
            public static void main(String args[]) {
                BorderLayoutFrame borderLayoutFrame = new BorderLayoutFrame();
                borderLayoutFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                borderLayoutFrame.setSize(600, 600);
                borderLayoutFrame.setVisible(true);
            }
        }

        public class BorderLayoutFrame extends JFrame implements ActionListener {
            private JButton buttons[]; // array of buttons to hide portions
            private final String names[] = { "North", "South", "East", "West", "Center" };
            private BorderLayout layout; // borderlayout object
            private PicPanel picture = new PicPanel();

            // set up GUI and event handling
            public BorderLayoutFrame() {
                super("Philosofic Problem");
                layout = new BorderLayout(5, 5); // 5 pixel gaps
                setLayout(layout);               // set frame layout
                buttons = new JButton[names.length]; // set size of array

                // create JButtons and register listeners for them
                for (int count = 0; count < names.length; count++) {
                    buttons[count] = new JButton(names[count]);
                    buttons[count].addActionListener(this);
                }

                add(buttons[0], BorderLayout.NORTH); // add button to north
                add(buttons[1], BorderLayout.SOUTH); // add button to south
                add(buttons[2], BorderLayout.EAST);  // add button to east
                add(buttons[3], BorderLayout.WEST);  // add button to west
                add(picture, BorderLayout.CENTER);   // add picture panel to center
            }

            // handle button events
            public void actionPerformed(ActionEvent event) {
            }
        }

    I've tried to add the image into the center of the layout. Here is the image class:

        public class PicPanel extends JPanel {
            Image img;
            private int width = 0;
            private int height = 0;

            public PicPanel() {
                super();
                img = Toolkit.getDefaultToolkit().getImage("table.jpg");
            }

            public void paintComponent(Graphics g) {
                super.paintComponents(g);
                if ((width <= 0) || (height <= 0)) {
                    width = img.getWidth(this);
                    height = img.getHeight(this);
                }
                g.drawImage(img, 0, 0, width, height, this);
            }
        }

    What is the problem? I'd appreciate your help, thanks. BTW: I'm using Eclipse - which directory is the image supposed to be in?
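
    Two details stand out, offered as guesses rather than a definitive diagnosis: paintComponent() calls super.paintComponents(g) (note the trailing "s"), and the relative path "table.jpg" is resolved against the JVM's working directory, which for a program launched from Eclipse is normally the project root rather than the source folder. A sketch of the panel with both points addressed (the image location is an assumption):

        import java.awt.Graphics;
        import java.awt.Image;
        import javax.swing.ImageIcon;
        import javax.swing.JPanel;

        public class PicPanel extends JPanel {
            private final Image img;

            public PicPanel() {
                // ImageIcon loads the file eagerly; "table.jpg" is resolved against
                // the working directory (the project root when run from Eclipse).
                // If the image is packaged with the classes, load it instead via
                // getClass().getResource("table.jpg").
                img = new ImageIcon("table.jpg").getImage();
            }

            @Override
            public void paintComponent(Graphics g) {
                super.paintComponent(g); // paintComponent, not paintComponents
                // scale the image to the current panel size
                g.drawImage(img, 0, 0, getWidth(), getHeight(), this);
            }
        }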

    Read the article

  • Zero code coverage with cobertura 1.9.2 but tests are working

    - by eraonel
    I run the code coverage target: <junit fork="yes" dir="${basedir}" failureProperty="test.failed"> <!-- Note the classpath order: instrumented classes are before the original (uninstrumented) classes. This is important. --> <classpath path="${instrumented.dir}" /> <classpath path="${classes.dir}" /> <classpath refid="classpath" /> <!-- The instrumented classes reference classes used by the Cobertura runtime, so Cobertura and its dependencies must be on your classpath. --> <classpath refid="cobertura.classpath" /> <formatter type="xml" /> <!--<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />--> <batchtest fork="yes" todir="${reports.xml.dir}"> <fileset dir="${classes.dir}"> <include name="**/generated/AllTests.class" /> </fileset> </batchtest> </junit> <junitreport todir="${reports.xml.dir}"> <fileset dir="${reports.xml.dir}"> <include name="TEST-*.xml" /> </fileset> <report format="frames" todir="${reports.html.dir}" /> </junitreport> Then I get the following output ( when using fork="true"): java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at net.sourceforge.cobertura.util.FileLocker.lock(FileLocker.java:124) at net.sourceforge.cobertura.coveragedata.ProjectData.saveGlobalProjectData(ProjectData.java:331) at net.sourceforge.cobertura.coveragedata.SaveTimer.run(SaveTimer.java:31) at java.lang.Thread.run(Thread.java:595) Caused by: java.io.IOException: No locks available at sun.nio.ch.FileChannelImpl.lock0(Native Method) at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:784) at java.nio.channels.FileChannel.lock(FileChannel.java:865) ... 8 more --------------------------------------- Unable to get lock on /vobs/rnc/rrt/roam2/roamSs/RoamMao_swb/RoamMao_bldu/ant_build/cobertura.ser.lock: null This is known to happen on Linux kernel 2.6.20. Make sure cobertura.jar is in the root classpath of the jvm process running the instrumented code. If the instrumented code is running in a web server, this means cobertura.jar should be in the web server's lib directory. Don't put multiple copies of cobertura.jar in different WEB-INF/lib directories. Only one classloader should load cobertura. It should be the root classloader. I am using Ant 1.7.0 and cobertura 1.9.2. Any ideas why there is no coverage? Test run ok as I see in my target. I have tried to switch java versions ( 1.5.0_06 and 1.6.0_10) but no difference.

    Read the article

  • How to listen for file system changes MAC - kFSEventStreamCreateFlagWatchRoot

    - by Cocoa Newbie
    Hi All, I am listening for Directory and disk changes in a COCOA project using FSEvents. I need to get events when a root folder is renamed or deleted. So, I passed kFSEventStreamCreateFlagWatchRoot while creating the FSEventStream.But even if I delete or rename the root folder I am not getting corresponding FSEventStreamEventFlags. Any idea what could possibly be the issue. I am listening for changes in a USB mounted device. I used both FSEventStreamCreate and FSEventStreamCreateRelativeToDevice. One thing I notices is when I try with FSEventStreamCreate I get the following error message while creating FSEventStream: (CarbonCore.framework) FSEventStreamCreate: watch_all_parents: error trying to add kqueue for fd 7 (/Volumes/NO NAME; Operation not supported) But with FSEventStreamCreateRelativeToDevice there are no errors but still not getting kFSEventStreamEventFlagRootChanged in event flags. Also, while creation using FSEventStreamCreateRelativeToDevice apple say's if I want to listen to root path changes pass emty string "". But I am not able to listen to root path changes by passing empty string. But when I pass "/" it works. But even for "/" I do not get any proper FSEventStreamEventFlags. I am pasting the code here: -(void) subscribeFileSystemChanges:(NSString*) path { PRINT_FUNCTION_BEGIN; // if already subscribed then unsubscribe if (stream) { FSEventStreamStop(stream); FSEventStreamInvalidate(stream); /* will remove from runloop */ FSEventStreamRelease(stream); } FSEventStreamContext cntxt = {0}; cntxt.info = self; CFArrayRef pathsToWatch = CFArrayCreate(NULL, (const void**)&path, 1, NULL); stream = FSEventStreamCreate(NULL, &feCallback, &cntxt, pathsToWatch, kFSEventStreamEventIdSinceNow, 1, kFSEventStreamCreateFlagWatchRoot ); FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode); FSEventStreamStart(stream); } call back function: static void feCallback(ConstFSEventStreamRef streamRef, void* pClientCallBackInfo, size_t numEvents, void* pEventPaths, const FSEventStreamEventFlags eventFlags[], const FSEventStreamEventId eventIds[]) {? char** ppPaths = (char**)pEventPaths; int i; for (i = 0; i < numEvents; i++) { NSLog(@"Event Flags %lu Event Id %llu", eventFlags[i], eventIds[i]); NSLog(@"Path changed: %@", [NSString stringWithUTF8String:ppPaths[i]]); } } Thanks a lot in advance.

    Read the article

  • Generate a list of file names based on month and year arithmetic

    - by MacUsers
    How can I list the numbers 01 to 12 (one for each of the 12 months) in such a way so that the current month always comes last where the oldest one is first. In other words, if the number is grater than the current month, it's from the previous year. e.g. 02 is Feb, 2011 (the current month right now), 03 is March, 2010 and 09 is Sep, 2010 but 01 is Jan, 2011. In this case, I'd like to have [09, 03, 01, 02]. This is what I'm doing to determine the year: for inFile in os.listdir('.'): if inFile.isdigit(): month = months[int(inFile)] if int(inFile) <= int(strftime("%m")): year = strftime("%Y") else: year = int(strftime("%Y"))-1 mnYear = month + ", " + str(year) I don't have a clue what to do next. What should I do here? Update: I think, I better upload the entire script for better understanding. #!/usr/bin/env python import os, sys from time import strftime from calendar import month_abbr vGroup = {} vo = "group_lhcb" SI00_fig = float(2.478) months = tuple(month_abbr) print "\n%-12s\t%10s\t%8s\t%10s" % ('VOs','CPU-time','CPU-time','kSI2K-hrs') print "%-12s\t%10s\t%8s\t%10s" % ('','(in Sec)','(in Hrs)','(*2.478)') print "=" * 58 for inFile in os.listdir('.'): if inFile.isdigit(): readFile = open(inFile, 'r') lines = readFile.readlines() readFile.close() month = months[int(inFile)] if int(inFile) <= int(strftime("%m")): year = strftime("%Y") else: year = int(strftime("%Y"))-1 mnYear = month + ", " + str(year) for line in lines[2:]: if line.find(vo)==0: g, i = line.split() s = vGroup.get(g, 0) vGroup[g] = s + int(i) sumHrs = ((vGroup[g]/60)/60) sumSi2k = sumHrs*SI00_fig print "%-12s\t%10s\t%8s\t%10.2f" % (mnYear,vGroup[g],sumHrs,sumSi2k) del vGroup[g] When I run the script, I get this: [root@serv07 usage]# ./test.py VOs CPU-time CPU-time kSI2K-hrs (in Sec) (in Hrs) (*2.478) ================================================== Jan, 2011 211201372 58667 145376.83 Dec, 2010 5064337 1406 3484.07 Feb, 2011 17506049 4862 12048.04 Sep, 2010 210874275 58576 145151.33 As I said in the original post, I like the result to be in this order instead: Sep, 2010 210874275 58576 145151.33 Dec, 2010 5064337 1406 3484.07 Jan, 2011 211201372 58667 145376.83 Feb, 2011 17506049 4862 12048.04 The files in the source directory reads like this: [root@serv07 usage]# ls -l total 3632 -rw-r--r-- 1 root root 1144972 Feb 9 19:23 01 -rw-r--r-- 1 root root 556630 Feb 13 09:11 02 -rw-r--r-- 1 root root 443782 Feb 11 17:23 02.bak -rw-r--r-- 1 root root 1144556 Feb 14 09:30 09 -rw-r--r-- 1 root root 370822 Feb 9 19:24 12 Did I give a better picture now? Sorry for not being very clear in the first place. Cheers!! Update @Mark Ransom This is the result from Mark's suggestion: [root@serv07 usage]# ./test.py VOs CPU-time CPU-time kSI2K-hrs (in Sec) (in Hrs) (*2.478) ========================================================== Dec, 2010 5064337 1406 3484.07 Sep, 2010 210874275 58576 145151.33 Feb, 2011 17506049 4862 12048.04 Jan, 2011 211201372 58667 145376.83 As I said before, I'm looking for the result to b printed in this order: Sep, 2010 - Dec, 2010 - Jan, 2011 - Feb, 2011 Cheers!!

    Read the article

  • How can I implement a site with ASP.NET MVC without using Visual Studio?

    - by Cheeso
    I have seen ASP.NET MVC Without Visual Studio, which asks, Is it possible to produce a website based on ASP.NET MVC, without using Visual Studio? And the accepted answer is, yes. Ok, next question: how? Here's an analogy. If I want to create an ASP.NET Webforms page, I load up my favorite text editor, create a file named Something.aspx. Then I insert into that file, some boilerplate: <%@ Page Language="C#" Debug="true" Trace="false" Src="Sourcefile.cs" Inherits="My.Namespace.ContentsPage" %> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Title goes here </title> <link rel="stylesheet" type="text/css" href="css/style.css"></link> <style type="text/css"> #elementid { font-size: 9pt; color: Navy; ... more css ... } </style> <script type="text/javascript" language='javascript'> // insert javascript here. </script> </head> <body> <asp:Literal Id='Holder' runat='server'/> <br/> <div id='msgs'></div> </body> </html> Then I also create the Sourcefile.cs file: namespace My.Namespace { using System; using System.Web; using System.Xml; // etc... public class ContentsPage : System.Web.UI.Page { protected System.Web.UI.WebControls.Literal Holder; void Page_Load(Object sender, EventArgs e) { // page load logic here } } } And that is a working ASPNET page, created in a text editor. Drop it into an IIS virtual directory, and it's working. What do I have to do, to make a basic, hello, World ASPNET MVC app, in a text editor? (without Visual Studio) Suppose I want a basic MVC app with a controller, one view, and a simple model. What files would I need to create, and what would go into them?

    Read the article

  • How to transform a cached XML via XSL?

    - by TruMan1
    I have a PHP script that caches a remote XML file. I want to XSL transform it before caching, but don't know how to do this: <?php // Set this to your link Id $linkId = "0oiy8Plr697u3puyJy9VTUWfPrCEvEgJR"; // Set this to a directory that has write permissions // for this script $cacheDir = "temp/"; $cachetime = 15 * 60; // 15 minutes // Do not change anything below this line // unless you are absolutely sure $feedUrl="http://mydomain.com/messageService/guestlinkservlet?glId="; $cachefile = $cacheDir .$linkId.".xml"; header('Content-type: text/xml'); // Send from the cache if $cachetime is not exceeded if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) { include($cachefile); echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->\n"; exit; } $contents = file_get_contents($feedUrl . $linkId); // show the contents of the XML file echo $contents; // write it to the cache $fp = fopen($cachefile, 'w'); fwrite($fp, $contents); fclose($fp); ?> This is the XSL string I want to use to transform it: <xsl:template match="/"> <kml xmlns="http://www.opengis.net/kml/2.2"> <Document> <xsl:apply-templates select="messageList" /> </Document> </kml> </xsl:template> <xsl:template match="messageList"> <name>My Generated KML</name> <xsl:apply-templates select="message" /> </xsl:template> <xsl:template match="message"> <Placemark> <name><xsl:value-of select="esnName" /></name> <Point> <coordinates> <xsl:value-of select="latitude" />,<xsl:value-of select="longitude" /> </coordinates> </Point> </Placemark> </xsl:template> I want to actually transform XML input and save/return a KML format. Can someone please help adjust this script? This was given to me and I am a little new to it.

    Read the article

  • Initialization of components with interdependencies - possible antipattern?

    - by Rosarch
    I'm writing a game that has many components. Many of these are dependent upon one another. When creating them, I often get into catch-22 situations like "WorldState's constructor requires a PathPlanner, but PathPlanner's constructor requires WorldState." Originally, this was less of a problem, because references to everything needed were kept around in GameEngine, and GameEngine was passed around to everything. But I didn't like the feel of that, because it felt like we were giving too much access to different components, making it harder to enforce boundaries. Here is the problematic code: /// <summary> /// Constructor to create a new instance of our game. /// </summary> public GameEngine() { graphics = new GraphicsDeviceManager(this); Components.Add(new GamerServicesComponent(this)); //Sets dimensions of the game window graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 600; graphics.ApplyChanges(); IsMouseVisible = true; screenManager = new ScreenManager(this); //Adds ScreenManager as a component, making all of its calls done automatically Components.Add(screenManager); // Tell the program to load all files relative to the "Content" directory. Assets = new CachedContentLoader(this, "Content"); inputReader = new UserInputReader(Constants.DEFAULT_KEY_MAPPING); collisionRecorder = new CollisionRecorder(); WorldState = new WorldState(new ReadWriteXML(), Constants.CONFIG_URI, this, contactReporter); worldQueryUtils = new WorldQueryUtils(worldQuery, WorldState.PhysicsWorld); ContactReporter contactReporter = new ContactReporter(collisionRecorder, worldQuery, worldQueryUtils); gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner); worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers); gameObjectManager.WorldQueryEngine = worldQuery; pathPlanner = new PathPlanner(this, worldQueryUtils, WorldQuery); gameObjectManager.PathPlanner = pathPlanner; combatEngine = new CombatEngine(worldQuery, new Random()); } Here is an excerpt of the above that's problematic: gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner); worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers); gameObjectManager.WorldQueryEngine = worldQuery; I hope that no one ever forgets that setting of gameObjectManager.WorldQueryEngine, or else it will fail. Here is the problem: gameObjectManager needs a WorldQuery, and WorldQuery needs a property of gameObjectManager. What can I do about this? Have I found an anti-pattern?

    Read the article

  • "state is undetermined" when starting Grails app on CloudFoundry

    - by SeattleStephens
    I have a Grails app that has been running on CloudFoundry for months. I updated the app from Grails 2.0.4 to 2.1.0 and also updated a few plugins I have been using. Now when I push the app to CloudFoundry and do the start, I receive the error: 'appname' state is undetermined, not enough information available. The tomcat Catalina log shows the NoClassDefFoundError below. I've read about issues with the Ivy cache but have not looked into that yet. I have updated vmc to the latest version (0.3.18). SEVERE: Error deploying web application directory ROOT java.lang.NoClassDefFoundError: org/apache/tomcat/PeriodicEventListener at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632) at java.lang.ClassLoader.defineClass(ClassLoader.java:616) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141) at org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2818) at org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1159) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1647) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1128) at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1026) at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4421) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4734) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:799) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:779) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:601) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1079) at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1002) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:506) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1317) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:324) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1065) at org.apache.catalina.core.StandardHost.start(StandardHost.java:840) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1057) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463) at org.apache.catalina.core.StandardService.start(StandardService.java:525) at org.apache.catalina.core.StandardServer.start(StandardServer.java:754) at org.apache.catalina.startup.Catalina.start(Catalina.java:595) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414) Caused by: java.lang.ClassNotFoundException: org.apache.tomcat.PeriodicEventListener at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1680) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1526) ... 34 more

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that it's value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function. var posix = require('posix'); var sys = require('sys'); var server; var child_js_file = process.ARGV[2]; var current_dir = __filename.split('/'); current_dir = current_dir.slice(0, current_dir.length-1).join('/'); var start_server = function(){ server = process.createChildProcess('node', [child_js_file]); server.addListener("output", function(data){sys.puts(data);}); }; var restart_server = function(){ sys.puts('change discovered, restarting server'); server.close(); start_server(); }; var parse_file_list = function(dir, files){ for (var i=0;i<files.length;i++){ var file = dir+'/'+files[i]; sys.puts('file assigned: '+file); posix.stat(file).addCallback(function(stats){ sys.puts('stats returned: '+file); if (stats.isDirectory()) posix.readdir(file).addCallback(function(files){ parse_file_list(file, files); }); else if (stats.isFile()) process.watchFile(file, restart_server); }); } }; posix.readdir(current_dir).addCallback(function(files){ parse_file_list(current_dir, files); }); start_server(); The output from this is: file assigned: /home/defrex/code/node/ejs.js file assigned: /home/defrex/code/node/templates file assigned: /home/defrex/code/node/web file assigned: /home/defrex/code/node/server.js file assigned: /home/defrex/code/node/settings.js file assigned: /home/defrex/code/node/apps file assigned: /home/defrex/code/node/dev_server.js file assigned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js For those from the future: node.devserver.js

    Read the article

  • PHP GD Allowed memory size exhausted

    - by gurun8
    I'm trying to process a directory of JPEG images (roughly 600+, ranging from 50k to 500k) using PHP: GD to resize and save the images but I've hit a bit of a snag quite early in the process. After correctly processing just 3 images (30K, 18K and 231K) I get a Allowed memory size of 16777216 bytes exhausted PHP Fatal error. I'm cycling through the images and calling the code below: list($w, $h) = getimagesize($src); if ($w > $it->width) { $newwidth = $it->width; $newheight = round(($newwidth * $h) / $w); } elseif ($w > $it->height) { $newheight = $it->height; $newwidth = round(($newheight * $w) / $h); } else { $newwidth = $w; $newheight = $h; } // create resize image $img = imagecreatetruecolor($newwidth, $newheight); $org = imagecreatefromjpeg($src); // Resize imagecopyresized($img, $org, 0, 0, 0, 0, $newwidth, $newheight, $w, $h); imagedestroy($org); imagejpeg($img, $dest); // Free up memory imagedestroy($img); I've tried to free up memory with the imagedestroy function but it doesn't seem to have any affect. The script just keeps consistently choking at the imagecreatefromjpeg line of code. I checked the php.ini and the memory_limit = 16M setting seems like it's holding correctly. But I can't figure out why the memory is filling up. Shouldn't it be releasing the memory back to the garbage collector? I don't really want to increase the memory_limit setting. This seems like a bad workaround that could potentially lead to more issues in the future. FYI: I'm running my script from a command prompt. It shouldn't affect the functionality but might influence your response so I thought I should mention it. Can anyone see if I'm just missing something simple or if there's a design flaw here? You'd think that this would be a pretty straightforward task. Surely this has to be possible, right?

    Read the article

  • How to set which version of the VC++ runtime Visual Studio 2005 targets

    - by TallGuy
    I have an application that contains a VC++ project (along with C# projects). Previously, (i.e. during the last year or so) when a build has been done, Visual Studio 2005 appears to be targeting the VC++ runtime version 8.0.50727.762. At least, that is what the Assembly.dll.intermediate.manifest file is telling me: <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> This version number matches the Visual Studio 2005 version number. The application worked fine when deployed to the webserver. The sun was shining, the birds were singing and all was right with the world. Now something has changed. I don't know what - a security patch, an obscure Visual Studio setting or something. Now Visual Studio 2005 seems to be targeting the wrong version of the VC++ runtime: <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.4053' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> When I deploy the application to the webserver, I get the dreaded This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. (Exception from HRESULT: 0x800736B1) error. This problem occurs even when I recompile previous versions of the application. I can absolutely guarantee that nothing at all has changed in the solution - we zip up the entire contents of the solution as part of the build process and archive it. I have unzipped a number of these to a temp directory, verified that the previous manifest file refers to 8.0.50727.762, recompiled using exactly the same command at the command line and then verified that the new manifest file now refers to 8.0.50727.4053. I am using Microsoft Visual Studio 2005 Version 8.0.50727.762 (SP.050727-7600) and Microsoft Visual C++ 2005 77646-008-0000007-41610. Why would Visual Studio revert to a previous version of the VC++ runtime? How do I specify which version it should use? What is going wrong here?

    Read the article

  • How do I get the Jetty 6 FileServerXml example to work?

    - by Norm
    I have followed along with the tutorial and the examples, yet I always get this error when I copy and paste the code:

        Exception in thread "main" java.lang.NullPointerException
            at net.test.FileServerXml.main(FileServerXml.java:13)

    In Eclipse I have the directory structured as such:

        package net.test
            - FileServerXml.java
            - fileserver.xml
            - index.html

    FileServerXml.java:

        package net.test;

        import org.mortbay.jetty.Server;
        import org.mortbay.resource.Resource;
        import org.mortbay.xml.XmlConfiguration;

        public class FileServerXml {
            public static void main(String[] args) throws Exception {
                Resource fileserver_xml = Resource.newSystemResource("fileserver.xml");
                XmlConfiguration configuration = new XmlConfiguration(fileserver_xml.getInputStream());
                Server server = (Server) configuration.configure();
                server.start();
                server.join();
            }
        }

    fileserver.xml:

        <?xml version="1.0"?>
        <Call name="addConnector">
            <Arg>
                <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
                    <Set name="port">8080</Set>
                </New>
            </Arg>
        </Call>
        <Set name="handler">
            <New class="org.eclipse.jetty.server.handler.HandlerList">
                <Set name="handlers">
                    <Array type="org.eclipse.jetty.server.Handler">
                        <Item>
                            <New class="org.eclipse.jetty.server.handler.ResourceHandler">
                                <Set name="directoriesListed">true</Set>
                                <Set name="welcomeFiles">
                                    <Array type="String"><Item>index.html</Item></Array>
                                </Set>
                                <Set name="resourceBase">.</Set>
                            </New>
                        </Item>
                        <Item>
                            <New class="org.eclipse.jetty.server.handler.DefaultHandler"></New>
                        </Item>
                    </Array>
                </Set>
            </New>
        </Set>

    Thanks for your input. Norm
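
    A guess at the failure, not a verified diagnosis: Resource.newSystemResource() looks the file up on the classpath and returns null when it isn't found, which would produce exactly this NullPointerException at line 13. Separately, the posted fileserver.xml instantiates org.eclipse.jetty.* classes (Jetty 7+) and appears to have lost its surrounding <Configure class="..."> root element, while the Java code imports the Jetty 6 org.mortbay.* packages, so the class names in the XML need to match the Jetty version actually on the classpath. A sketch that sidesteps the classpath lookup by loading the config from an explicit path (the path itself is a placeholder):

        package net.test;

        import java.io.FileInputStream;
        import org.mortbay.jetty.Server;
        import org.mortbay.xml.XmlConfiguration;

        public class FileServerXml {
            public static void main(String[] args) throws Exception {
                // Hypothetical path - adjust to wherever fileserver.xml really lives,
                // or keep newSystemResource() and make sure the file is copied to the
                // output folder so it ends up on the runtime classpath.
                String configPath = "src/net/test/fileserver.xml";
                XmlConfiguration configuration =
                        new XmlConfiguration(new FileInputStream(configPath));
                Server server = (Server) configuration.configure();
                server.start();
                server.join();
            }
        }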

    Read the article

  • Parallel programming in C#

    - by Alxandr
    I'm interested in learning about parallel programming in C#.NET (not everything there is to know, but the basics and maybe some good practices), so I've decided to reprogram an old program of mine called ImageSyncer. ImageSyncer is a really simple program; all it does is scan through a folder and find all files ending with .jpg, then calculate the new position of each file based on the date it was taken (by parsing the EXIF data). After a location has been generated, the program checks for an existing file at that location, and if one exists it compares the last write time of the file to copy and the file "in its way". If those are equal, the file is skipped. If not, an MD5 checksum of both files is created and matched. If there is no match, the file to be copied is given a new location to be copied to (for instance, if it was to be copied to "C:\test.jpg" it's copied to "C:\test(1).jpg" instead). The result of this operation is put into a queue of a struct type that contains two strings: the original file and the position to copy it to. Then that queue is iterated over until it is empty and the files are copied. In other words there are four operations:

    1. Scan the directory for JPEGs
    2. Parse the files for EXIF data and generate the copy location
    3. Check for file existence and, if needed, generate a new path
    4. Copy the files

    So I want to rewrite this program to make it parallel and able to perform several of the operations at the same time, and I was wondering what the best way to achieve that would be. I've come up with two different models, but neither of them may be any good at all. The first is to parallelize the four steps of the old program, so that step one runs on several threads, and only when all of step 1 is finished does step 2 begin. The other (which I find more interesting, because I have no idea how to do it) is to create a sort of producer/consumer model, so that when a thread is finished with step 1 for an item, another thread takes over and performs step 2 on that item (or something like that). But as I said, I don't know if either of these is a good solution. Also, I don't know much about parallel programming at all. I know how to create a thread and how to make it run a function that takes an object as its only parameter, and I've also used the BackgroundWorker class on one occasion, but I'm not that familiar with any of them. Any input would be appreciated.

    Read the article

  • Getresponse not working after authentication

    - by Hazler
    For starters, here's my code: // Create a request using a URL that can receive a post. WebRequest request = WebRequest.Create("http://mydomain.com/cms/csharptest.php"); request.Credentials = new NetworkCredential("myUser", "myPass"); // Set the Method property of the request to POST. request.Method = "POST"; // Create POST data and convert it to a byte array. string postData = "name=PersonName&age=25"; byte[] byteArray = Encoding.UTF8.GetBytes(postData); // Set the ContentType property of the WebRequest. request.ContentType = "application/x-www-form-urlencoded"; // Set the ContentLength property of the WebRequest. request.ContentLength = byteArray.Length; // Get the request stream. Stream dataStream = request.GetRequestStream(); // Write the data to the request stream. dataStream.Write(byteArray, 0, byteArray.Length); // Close the Stream object. dataStream.Close(); // Get the response. HttpWebResponse response = (HttpWebResponse)request.GetResponse(); // Display the status. Console.WriteLine((response).StatusDescription); // Get the stream containing content returned by the server. dataStream = response.GetResponseStream(); // Open the stream using a StreamReader for easy access. StreamReader reader = new StreamReader(dataStream); // Read the content. string responseFromServer = reader.ReadToEnd(); // Display the content. Console.WriteLine(responseFromServer); // Clean up the streams. reader.Close(); dataStream.Close(); response.Close(); The directory cms/ requires authentication, but if I try running this same code somewhere, where authentication isn't needed, it works fine. The error (System.Net.WebException: The remote server returned an error: (403) Forbidden) occurs at HttpWebResponse response = (HttpWebResponse)request.GetResponse(); I have managed in reading data after authenticating, but not if I also send POST data. What's wrong with this?

    Read the article

  • java web application best practices

    - by Bruce
    Hi all I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails.. Hugely grateful for any help! I have what I think is a fairly simple web application. It contains a couple of jsps that reference a couple of java beans, and the usual static html, js, css and images. Decision 1) I wanted to have a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good? Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my html, for example, I may have a reference to a static file such as http://static.foo.com/file . To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the urls work correctly without changing anything. Decision 3) I decided to use eclipse and maven to give me a best practice environment for administering and building my project. So I have a nice tight set up now, except that: Every time I want to change anything in development, like one line in an html file, I have to rebuild the entire project and then wait for tomcat to load the war before I can see if it's what I wanted. So my questions are: 1) Is there a way to connect up eclipse and tomcat so that I don't have to rebuild the war each time? ie tomcat is looking straight at my actual workspace to serve up the static files? 2)I think I'm maybe making things harder by using /etc/hosts to reflect production urls - is there a better way that doesn't involve manually changing over urls (relative urls are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?) 3) Is this really best practice?? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to be able to develop javascript and html and css quickly, as quickly as if one just pointed apache at the directory and developed live? What do people find works? Many thanks!

    Read the article

  • Sqlite issues with HTC Desire HD

    - by Greg
    Recently I have been getting a lot of complaints about the HTC Desire series and it failing while invoking sql statements. I have received reports from users with log snapshots that contain the following. I/Database( 2348): sqlite returned: error code = 8, msg = statement aborts at 1: [pragma journal_mode = WAL;] E/Database( 2348): sqlite3_exec to set journal_mode of /data/data/my.app.package/files/localized_db_en_uk-1.sqlite to WAL failed followed by my app basically burning in flames because the call to open the database results in a serious runtime error that manifests itself as the cursor being left open. There shouldn't be a cursor at this point as we are trying to open it. This only occurs with the HTC Desire HD and Z. My code basically does the following (changed a little to isolate the problem area). SQLiteDatabase db; String dbName; public SQLiteDatabase loadDb(Context context) throws IOException{ //Close any old db handle if (db != null && db.isOpen()) { db.close(); } // The name of the database to use from the bundled assets. String dbAsset = "/asset_dir/"+dbName+".sqlite"; InputStream myInput = context.getAssets().open(dbAsset, Context.MODE_PRIVATE); // Create a file in the app's file directory since sqlite requires a path // Not ideal but we will copy the file out of our bundled assets and open it // it in another location. FileOutputStream myOutput = context.openFileOutput(dbName, Context.MODE_PRIVATE); byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer)) > 0) { myOutput.write(buffer, 0, length); } // Close the streams myOutput.flush(); // Guarantee Write! myOutput.getFD().sync(); myOutput.close(); myInput.close(); // Not grab the newly written file File fileObj = context.getFileStreamPath(dbName); // and open the database return db = SQLiteDatabase.openDatabase(fileObj.getAbsolutePath(), null, SQLiteDatabase.OPEN_READONLY | SQLiteDatabase.NO_LOCALIZED_COLLATORS); } Sadly this phone is only available in the UK and I don't have one in my inventory. I am only getting reports of this type from the HTC Desire series. I don't know what changed as this code has been working without any problem. Is there something I am missing?

    Read the article

  • Xcode + Perforce: it frequently shows me the spinning wheel for no reason! What can i do?

    - by GamingHorror
    While working on an Xcode project i keep getting the spinning wheel while switching files, scrolling, searching, typing, debugging, removing breakpoints, switching back from another app or saving. It also happens before compiling but usually it just happens from time to time for no apparent reason. This is the second time this started happening in a Xcode project and it's driving me nuts. It completely breaks my flow of work to have to wait for the spinning wheel to go away (2-5 seconds). What on earth could i possibly do to ... figure out what's causing the problem? resolve the problem? More details: When any project is small, everything is super-smooth with Xcode and Perforce. Two of my projects eventually had this spinning wheel problem after about 4 weeks of work. It only happened with those two projects so far. They consist of around 1000-1200 files in source control, most of them assets. The problem occurs even if i manually check out the whole project in Perforce. The problem is gone when i copy the project directory and work in the copy which is no longer under source control, or if i create a branch in Perforce and work in the branch (under source control). One of these projects i shared with a colleague, and he had exactly the same issues on his Mac. We eventually switched to Subversion and the spinning-wheel issue immediately went away. Now that i've received an updated copy of the project and simply put it under Perforce as a new project, the problem also went away (so far it did not resurface). It leads me to think that it may be caused by a larger number of file revisions. The server itself (version 2009.1) is on a different (Windows) machine on my LAN, so there's definetely no Internet lag involved. The complete repository is just 1 GB in size spread over a dozen projects or so. I'm sorry if the question seems more like a support inquiry for Perforce. However i'm using the free version of Perforce so i'm not entitled to get support from them. I hope no one minds me asking here. I'm really bummed out by this. I don't want to have to create a new branch for the project each time the spinning wheel problem surfaces.

    Read the article

  • Mercurial local repository backup

    - by Ricket
    I'm a big fan of backing things up. I keep my important school essays and such in a folder of my Dropbox. I make sure that all of my photos are duplicated to an external drive. I have a home server where I keep important files mirrored across two drives inside the server (like a software RAID 1). So for my code, I have always used Subversion to back it up. I keep the trunk folder with a stable copy of my application, but then I create a branch named with my username, and inside there is my working copy. I make very few changes between commits to that branch, with the understanding that the code in there is my backup. Now I'm looking into Mercurial, and I must admit I haven't truly used it yet so I may have this all wrong. But it seems to me that you have a server-side repository, and then you clone it to a working directory in the form of a local repository. Then as you work on something, you make commits to that local repository, and when things are in a state to be shared with others, you hg push to the parent repository on the server. Between pushes of stable, tested, bug-free code, where is the backup? After doing some thinking, I've come to the conclusion that it is not meant for backup purposes and it assumes you've handled that on your own. I guess I need to keep my Mercurial local repositories in my dropbox or some other backed-up location, since my in-progress code is not pushed to the server. Is this pretty much it, or have I missed something? If you use Mercurial, how do you backup your local repositories? If you had turned on your computer this morning and your hard drive went up in flames (or, more likely, the read head went bad, or the OS corrupted itself, ...), what would be lost? If you spent the past week developing a module, writing test cases for it, documenting and commenting it, and then a virus wipes your local repository away, isn't that the only copy? So then on the flip side, do you create a remote repository for every local repository and push to it all the time? How do you find a balance? How do you ensure your code is backed up? Where is the line between using Mercurial as backup, and using a local filesystem backup utility to keep your local repositories safe?

    Read the article

  • Hibernate3: Self-Referencing Objects

    - by monojohnny
    Need some help on understanding how to do this; I'm going to be running recursive 'find' on a file system and I want to keep the information in a single DB table - with a self-referencing hierarchial structure: This is my DB Table structure I want to populate. DirObject Table: id int NOT NULL, name varchar(255) NOT NULL, parentid int NOT NULL); Here is the proposed Java Class I want to map (Fields only shown): public DirObject { int id; String name; DirObject parent; ... For the 'root' directory was going to use parentid=0; real ids will start at 1, and ideally I want hibernate to autogenerate the ids. Can somebody provide a suggested mapping file for this please; as a secondary question I thought about doing the Java Class like this instead: public DirObject { int id; String name; List<DirObject> subdirs; Could I use the same data model for either of these two methods ? (With a different mapping file of course). --- UPDATE: so I tried the mapping file suggested below (thanks!), repeated here for reference: <hibernate-mapping> <class name="my.proj.DirObject" table="category"> ... <set name="subDirs" lazy="true" inverse="true"> <key column="parentId"/> <one-to-many class="my.proj.DirObject"/> </set> <many-to-one name="parent" class="my.proj.DirObject" column="parentId" cascade="all" /> </class> ...and altered my Java class to have BOTH 'parentid' and 'getSubDirs' [returning a 'HashSet']. This appears to work - thanks, but this is the test code I used to drive this - I think I'm not doing something right here, because I thought Hibernate would take care of saving the subordinate objects in the Set without me having to do this explicitly ? DirObject dirobject=new DirObject(); dirobject.setName("/files"); dirobject.setParent(dirobject); DirObject d1, d2; d1=new DirObject(); d1.setName("subdir1"); d1.setParent(dirobject); d2=new DirObject(); d2.setName("subdir2"); d2.setParent(dirobject); HashSet<DirObject> subdirs=new HashSet<DirObject>(); subdirs.add(d1); subdirs.add(d2); dirobject.setSubdirs(subdirs); session.save(dirobject); session.save(d1); session.save(d2);
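
    For what it's worth, a sketch of the save path under the assumption that the cascade is declared on the collection mapping (e.g. cascade="all" or "save-update" on the <set> for the subdirectories) rather than on the many-to-one "parent" side; with inverse="true" it is each child's parent reference that populates the parentid column, so the children only need their parent set before the root is saved:

        import java.util.HashSet;
        import java.util.Set;
        import org.hibernate.Session;

        public class DirObjectCascadeDemo {
            // Sketch only: uses the DirObject class and setters from the question
            // (setName/setParent/setSubdirs) and assumes cascade is configured on
            // the subdirectory <set> in the mapping.
            public static void saveTree(Session session) {
                DirObject root = new DirObject();
                root.setName("/files");

                DirObject d1 = new DirObject();
                d1.setName("subdir1");
                d1.setParent(root);

                DirObject d2 = new DirObject();
                d2.setName("subdir2");
                d2.setParent(root);

                Set<DirObject> subdirs = new HashSet<DirObject>();
                subdirs.add(d1);
                subdirs.add(d2);
                root.setSubdirs(subdirs);

                // With cascade on the collection, saving the root should persist the
                // children too; the explicit save(d1)/save(d2) calls become unnecessary.
                session.save(root);
            }
        }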

    Read the article
