Search Results


  • What exactly is the difference between the Dreamhost IDE and Netbeans?

    - by mikemick
    I just started using NetBeans about a week ago, and really like it thus far. Now I'm seeing something about Dreamhost IDE, which I gather is a program built on the NetBeans platform. I use Dreamhost as the hosting company for many of my projects. What is the benefit of using Dreamhost IDE over NetBeans? Documentation on the software is non-existent as far as I can tell (not even a mention in the Dreamhost wiki). All I was able to find was a short description of what it is on a SourceForge download page, and a short silent video on YouTube demoing it. So I guess I'm asking: what features does it bring to the table, and what is the difference between it and NetBeans? The description on the SourceForge page is as follows (typos retained)... DreamHost IDE is php and ruby integrated development environment built on NetBeans IDE and provides easy deploy of your applications to the DreamHost services. Also provides you an easy eay hew to setup these services. Maybe the answer is in the description, and I just don't comprehend it?

  • NSKeyedArchiver on NSArray has large size overhead

    - by redguy
    I'm using NSKeyedArchiver in a Mac OS X program which generates data for an iPhone application. I found out that, by default, the resulting archives are much bigger than I expected. Example:

        NSMutableArray *ar = [NSMutableArray arrayWithCapacity:10];
        for (int i = 0; i < 100000; i++) {
            NSString *s = [NSString stringWithFormat:@"item%06d", i];
            [ar addObject:s];
        }
        [NSKeyedArchiver archiveRootObject:ar toFile:@"NSKeyedArchiver.test"];

    This stores 10 * 100000 = 1M bytes of useful data, yet the size of the resulting file is almost three megabytes. The overhead seems to grow with the number of items in the array. In this case, for 1000 items, the file was about 22k. "file" reports that it is an "Apple binary property list" (not the XML format). Is there a simple way to prevent this huge overhead? I wanted to use NSKeyedArchiver for the simplicity it provides. I could write the data in my own, non-generic, binary format, but that's not very elegant. Also, aggregating the data into large chunks and feeding those to NSKeyedArchiver should work, but that rather defeats the point of using a simple, ready-to-use archiver. Am I missing some method call or usage pattern that would reduce this overhead?
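
    Since the array above holds only property-list-friendly strings, one possible workaround (a hedged sketch, not from the original thread) is to skip keyed archiving entirely and serialize the array as a plain binary plist, which avoids the per-object bookkeeping NSKeyedArchiver adds:

        // Hedged sketch: plain binary plist instead of a keyed archive.
        // Only works because the array contains property-list types.
        NSString *err = nil;
        NSData *data = [NSPropertyListSerialization
            dataFromPropertyList:ar
                          format:NSPropertyListBinaryFormat_v1_0
                errorDescription:&err];
        [data writeToFile:@"PlainPlist.test" atomically:YES];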

  • Run ajax scripts on page when navigating with ajax?

    - by Oskar Kjellin
    I've got a bit of an issue in my ASP.NET MVC project. I have a chat div in the bottom right corner (like Facebook), and of course I do not want this to reload when navigating, so all my navigation is ajax. The problem I am facing is that I use the following code at the top of the view page:

        <script type="text/javascript">
            $(document).ready(function() {
                $('#divTS').hide();
                $('a#showTS').click(function() {
                    $('#divTS').slideToggle(400);
                    return false;
                });
            });
        </script>

    The problem is that this code is only loaded with ajax and does not seem to fire. I would like to run all scripts in the newly loaded view, just as if I hadn't navigated with ajax. I cannot put this in the site.master, as that only loads once, and at that point the divs I am trying to hide probably don't exist yet. Is there a good way to run scripts in the ajax-loaded div?
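
    One approach from that jQuery era (a hedged sketch, not necessarily the poster's eventual fix) is to bind the handler with live(), which survives content being swapped in by ajax, and to do the initial hide in the ajax success callback rather than document.ready:

        // Hedged sketch: live() (jQuery 1.3+) keeps the handler attached
        // even after ajax replaces the elements it targets.
        $('a#showTS').live('click', function() {
            $('#divTS').slideToggle(400);
            return false;
        });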

  • Large Reports for MSRS

    - by Greg Lorenz
    I have a report that needs to render a very large number of pages (about 4500 in this instance) in a web browser. The total time needed on the report server, from start to finish, is about 30 minutes for the instance I am looking at. Does anyone know what options exist for handling the rendering of such a large report in a web browser? In terms of investigating this, I have already performed the following tasks. The report gets its data from a database table that already has the data flattened, to the point that the TimeDataRetrieval on the report server is 17812 ms, or about 18 seconds. The report itself has been reformatted to use the least expensive report objects it can to render the data in the correct format; it basically consists of a table with about 4 nested tables, and that's it. We were trying to accomplish this on a 2005 report server but kept running into memory issues that were not acceptable for our clients. In response, we moved this onto a 2008 report server to take advantage of the fact that it uses the file system instead of memory, and we were finally able to get it to work without running out of available memory, but of course it takes much longer.

  • Check for modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force-build the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. The build report says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException:
        Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        Server certificate verification failed: issuer is not trusted
        (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log sameUrlAbove
        -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
        --username ccnetadmin --password cruise --non-interactive --no-auth-cache
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
          at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
            <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
            <trunkUrl> check out url </trunkUrl>
            <workingDirectory>C:\ProjectWorkingDirectories\IntranetPortal\Source</workingDirectory>
            <username>ccnetadmin</username>
            <password>cruise</password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
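
    The error indicates the svn client running under the CruiseControl.NET service account has never accepted VisualSVN's self-signed certificate, and --non-interactive forbids the acceptance prompt. A common approach (hedged; not confirmed by this thread) is to run one svn command interactively as that same service account and permanently accept the certificate, so later non-interactive runs find it cached:

        rem Run this once as the CC.NET service account; answer "p"
        rem (accept permanently) at the certificate prompt.
        "C:\Program Files\VisualSVN Server\bin\svn.exe" list https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source --username ccnetadmin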

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell SVN, from the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but executing the svn commit command restores the files which I deleted from my working copy. The way I am doing it is: 1. Delete some files in my working copy (so the number of files in my WC is less than the number in the repository). 2. Execute the svn command using CruiseControl:

        <exec executable="svn.exe">
            <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
            <buildTimeoutSeconds>1000</buildTimeoutSeconds>
        </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or what changes/configuration are needed? Thank you all. Regards, uday
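
    For what it's worth (a hedged sketch, not from the original thread): deleting files with the OS delete only leaves them "missing" in the working copy; Subversion will not commit a deletion that was never scheduled, and an update (which CI servers typically run before building) brings the files back. Scheduling the deletion first should make the commit stick:

        svn delete path\to\obsolete-file
        svn ci -m "remove obsolete files" --no-auth-cache --non-interactive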

  • Why does the Java language not offer a way to declare getters and setters of a given "field" through annotations?

    - by zim2001
    I have happily designed and developed JEE applications for some 9 years, but I realized recently that, as time goes by, I feel more and more fed up with dragging around all these ugly bean classes with their bunches of getters and setters. Consider a basic bean like this:

        public class MyBean {
            // needs getter AND setter
            private int myField1;
            // needs only a getter, no setter
            private int myField2;
            // needs only a setter, no getter
            private int myField3;

            /**
             * Get the field1
             * @return the field1
             */
            public int getField1() {
                return myField1;
            }

            /**
             * Set the field1
             * @param value the value
             */
            public void setField1(int value) {
                myField1 = value;
            }

            /**
             * Get the field2
             * @return the field2
             */
            public int getField2() {
                return myField2;
            }

            /**
             * Set the field3
             * @param value the value
             */
            public void setField3(int value) {
                myField3 = value;
            }
        }

    I'm dreaming of something like this:

        public class MyBean {
            @inout(public, public)
            private int myField1;
            @out(public)
            private int myField2;
            @in(public)
            private int myField3;
        }

    No more stupid javadoc; just say the important thing. It would still be possible to mix annotations with written-out getters or setters, to cover cases where a non-trivial set or get is needed. In other words, the annotation would auto-generate the getter/setter code except where an explicit one is provided. Moreover, I'm also dreaming of replacing things like this:

        MyBean b = new MyBean();
        int v = b.getField1();
        b.setField3(v + 1);

    with this:

        MyBean b = new MyBean();
        int v = b.field1;
        b.field3 = v + 1;

    In fact, writing "b.field1" on the right side of an expression would be semantically identical to writing "b.getField1()", as if it had been replaced by some kind of preprocessor. It's just an idea, but I'm wondering if I'm alone on this topic, and whether it has major flaws. I'm aware that this question doesn't exactly meet the SO credo (we prefer questions that can be answered, not just discussed), so I flag it community wiki...
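
    Worth noting as an aside (not part of the original question): Project Lombok does essentially the first half of this at compile time via annotation processing. A minimal sketch of the same bean, assuming Lombok is on the classpath:

        import lombok.Getter;
        import lombok.Setter;

        public class MyBean {
            @Getter @Setter private int myField1;  // getter AND setter
            @Getter private int myField2;          // getter only
            @Setter private int myField3;          // setter only
        }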

  • Calling Model Functions from a Library

    - by Abs
    Hello all, I have turned a normal PHP class into a library so I can use it in CodeIgniter. I can load it and call the functions I need in that class. Here is that class to help with the question. However, at quite a few points the class has to call functions that reside in the model that instantiated it. How can I do this? Currently, normal calls don't work. Here is my code:

        class Controlpanel_model extends Model {
            var $category = '';
            var $dataa = 'a';

            function Controlpanel_model() {
                parent::Model();
            }

            function import_browser_bookmarks() {
                $this->load->library('BookmarkParser');
                /*
                 * In this call to the class I pass the names of the model
                 * functions it should call back into.
                 * (See line 383 of the linked class.)
                 */
                $this->bookmarkparser->parseNetscape("./bookmarks.html", 0, 'myURL', 'myFolder');
                return $this->dataa;
            }

            function myURL($data, $depth, $no) {
                $category = $this->category;
                $this->dataa .= 'Tag = ' . $category . '<br />' . 'URL = ' . $data["url"] . '<br />' . 'Title = ' . $data["descr"] . '<br />' . '<br /><br />';
            }

            function myFolder($data, $depth, $no) {
                $this->category = $data["name"];
            }
        }

    Thanks all for any help.
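
    If the library accepts arbitrary PHP callbacks rather than bare function names (an assumption; it depends on how parseNetscape dispatches them internally), the usual fix is to pass object-method callables so the calls land on the model instance:

        // Hedged sketch: array($object, 'method') is PHP's standard callable
        // form; this works only if the library invokes callbacks via
        // call_user_func() or equivalent.
        $this->bookmarkparser->parseNetscape(
            "./bookmarks.html", 0,
            array($this, 'myURL'),
            array($this, 'myFolder')
        );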

  • Is delete p, where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete causes a memory leak. I wrote this tiny program and compiled it with Visual Studio 2008, running on Windows XP:

        #include "stdafx.h"
        #include "Windows.h"

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain() {
            for (unsigned int i = 0; i < 1024*1000; i++) {
                int* p = new int[1024*100000];
                for (int j = 0; j < BLOCK_SIZE; j++)
                    p[j] = j % 2;
                Sleep(1000);
                delete p;
            }
        }

    I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected. I then modified my test program to allocate an array of a non-primitive type:

        #include "stdafx.h"
        #include "Windows.h"

        struct aStruct {
            aStruct() : i(1), j(0) {}
            int i;
            char j;
        } NonePrimitive;

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain() {
            for (unsigned int i = 0; i < 1024*100000; i++) {
                aStruct* p = new aStruct[1024*100000];
                Sleep(1000);
                delete p;
            }
        }

    After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
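
    For context (standard C++ background, not specific to this thread): the language requires new[] to be paired with delete[]; mismatching them is undefined behavior, so "appears to work on MSVC with trivially destructible types" is an allowed but unportable outcome, not a guarantee. The matching form is:

        int* p = new int[BLOCK_SIZE];
        // ... use p ...
        delete[] p;   // plain "delete p" here compiles, but is undefined behavior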

  • Which Perl modules can be installed just by copying lib files?

    - by elliot100
    I'm an absolute beginner at Perl, and am trying to use some non-core modules on my shared Linux web host. I have no command line access, only FTP. Host admins will consider installing modules on request, but the ones I want to use are updated frequently (DateTime::TimeZone for example), and I'd prefer to have control over exactly which version I'm using. By experimentation, I've found some modules can be installed by copying files from the module's lib directory to a directory on the host, and using use lib "local_path"; in my script, i.e. no compiling is required to install (DateTime and DateTime::TimeZone again). How can I tell whether this is the case for a particular module? I realise I'll have to resolve dependencies myself. Additionally: if I wanted to be able to install any module, including those which require compiling, what would I be looking for in terms of hosting? I'm guessing at the moment I share a VM with several others and the minimum provision I'd need would be a dedicated VM with shell access?
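
    As a rule of thumb (an editorial note, not from the original post): a distribution that is "pure Perl", containing only .pm files and no .xs or .c sources, can usually be installed by copying its lib tree; anything with XS needs a compiler on the host. A minimal sketch of the copy-and-use approach, assuming the module's lib contents were uploaded to a hypothetical perl5lib/ directory next to the script:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use FindBin;
        use lib "$FindBin::Bin/perl5lib";   # hypothetical local module dir
        use DateTime::TimeZone;             # resolved from perl5lib/ first

        print DateTime::TimeZone->new(name => 'Europe/London')->name, "\n";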

  • WTK emulator bluetooth connection problem

    - by Gokhan B.
    Hi! I'm developing a J2ME program with Eclipse / WTK 2.5.2 and am having a problem connecting two emulators using Bluetooth. There is one server and one client, running on two different emulators. The problem is that the client program cannot discover any Bluetooth device. Here is the server code:

        public Server() {
            try {
                LocalDevice local = LocalDevice.getLocalDevice();
                local.setDiscoverable(DiscoveryAgent.GIAC);
                server = (StreamConnectionNotifier) Connector.open("btspp://localhost:" + UUID_STRING + ";name=" + SERVICE_NAME);
                Util.Log("EchoServer() Server connector open!");
            } catch (Exception e) {}
        }

    After calling Connector.open, I get the following warning in the console, which I believe is related:

        Warning: Unregistered device: unspecified

    And here is the client code that searches for devices:

        public SearchForDevices(String uuid, String nm) {
            UUIDStr = uuid;
            srchServiceName = nm;
            try {
                LocalDevice local = LocalDevice.getLocalDevice();
                agent = local.getDiscoveryAgent();
                deviceList = new Vector();
                agent.startInquiry(DiscoveryAgent.GIAC, this); // non-blocking
            } catch (Exception e) {}
        }

    The system never calls deviceDiscovered, but calls inquiryCompleted() with the INQUIRY_COMPLETED parameter, so I suppose the client program itself runs fine. Bluetooth is enabled in the emulator settings. Any ideas?

  • Unable to connect to UNC share with WindowsIdentity.Impersonate, but works fine using LogonUser

    - by Rob
    Hopefully I'm not missing something obvious here, but I have a class that needs to create some directories on a UNC share and then move files to the new directory. When we connect using LogonUser things work fine with no errors, but when we try to use the user indicated by integrated Windows authentication we run into problems. Here's some working and non-working code to give you an idea what is going on. The following works and logs the requested information:

        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool LogonUser(string lpszUsername, string lpszDomain,
            string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
        private static extern bool CloseHandle(IntPtr handle);

        IntPtr token;
        WindowsIdentity wi;
        if (LogonUser("user", "network", "password",
                      8, // LOGON32_LOGON_NETWORK_CLEARTEXT
                      0, // LOGON32_PROVIDER_DEFAULT
                      out token))
        {
            wi = new WindowsIdentity(token);
            WindowsImpersonationContext wic = wi.Impersonate();
            Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
            Logging.LogMessage(path);
            DirectoryInfo info = new DirectoryInfo(path);
            Logging.LogMessage(info.Exists.ToString());
            Logging.LogMessage(info.Name);
            Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString());
            Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString());
            wic.Undo();
            CloseHandle(token);
        }

    The following fails with an error message indicating the network name is not available, even though the correct user name is reported by GetCurrent().Name:

        WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity;
        using (identity.Impersonate())
        {
            Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
            Logging.LogMessage(path);
            DirectoryInfo info = new DirectoryInfo(path);
            Logging.LogMessage(info.Exists.ToString());
            Logging.LogMessage(info.Name);
            Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString());
            Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString());
        }
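
    A likely explanation (standard Windows behavior, though not confirmed in this excerpt) is the double-hop problem: the token produced by integrated Windows authentication on the web server generally cannot be forwarded to a second machine, such as the UNC file server, unless Kerberos delegation is configured, whereas a token made with explicit credentials via LogonUser can. One hedged workaround sketch, reusing the LogonUser P/Invoke declared above with the NewCredentials logon type so that only outbound network access uses the explicit account:

        // Hedged sketch: LOGON32_LOGON_NEW_CREDENTIALS (9) keeps the local
        // identity but presents "network\user" to remote servers.
        IntPtr token;
        if (LogonUser("user", "network", "password",
                      9, // LOGON32_LOGON_NEW_CREDENTIALS
                      3, // LOGON32_PROVIDER_WINNT50
                      out token))
        {
            using (new WindowsIdentity(token).Impersonate())
            {
                // UNC access here goes out with the explicit credentials
            }
            CloseHandle(token);
        }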

  • How can I get the Hibernate Configuration object from Spring?

    - by Wayne Russell
    Hi, I am trying to obtain the Spring-defined Hibernate Configuration and SessionFactory objects in my non-Spring code. The following is the definition in my applicationContext.xml file:

        <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <prop key="hibernate.hbm2ddl.auto">update</prop>
                    <prop key="hibernate.cglib.use_reflection_optimizer">true</prop>
                    <prop key="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</prop>
                </props>
            </property>
            <property name="dataSource">
                <ref bean="dataSource"/>
            </property>
        </bean>

    If I call getBean("sessionFactory"), I am returned a $Proxy0 object which appears to be a proxy for the Hibernate SessionFactory object. But that isn't what I want: I need the LocalSessionFactoryBean itself, because I need access to the Configuration as well as the SessionFactory. The reason I need the Configuration object is that our framework uses Hibernate's dynamic model to insert mappings automatically at runtime, which requires changing the Configuration and rebuilding the SessionFactory. Really, all we're trying to do is obtain the Hibernate config that already exists in Spring, so that those of our customers who already have that information in Spring don't need to duplicate it in a hibernate.cfg.xml file in order to use our Hibernate features.
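
    One standard Spring mechanism should apply here (hedged only in that the poster's wider context may differ): prefixing the bean name with & asks the container for the FactoryBean itself rather than the object it produces, and LocalSessionFactoryBean exposes the Configuration it built:

        // Hedged sketch: '&' (BeanFactory.FACTORY_BEAN_PREFIX) returns the
        // LocalSessionFactoryBean, not the SessionFactory proxy it created.
        // "context" stands for whatever ApplicationContext reference you hold.
        LocalSessionFactoryBean lsfb =
            (LocalSessionFactoryBean) context.getBean("&sessionFactory");
        Configuration configuration = lsfb.getConfiguration();
        SessionFactory sessionFactory =
            (SessionFactory) context.getBean("sessionFactory");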

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface, GraphStructure. In order to layer limitations on top of this (e.g. acyclic, non-null, unique vertex data, etc.), I can see two possible routes, each making use of the GraphStructure interface:
    1. Create a single class ("ControlledGraph") that has a set of bit flags specifying the various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent.
    2. Use the Decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we adhere to the Single Responsibility Principle.
    I would lean toward the latter, but by Jove!, I hate the Decorator pattern. It is the epitome of clutter, IMO. Truthfully, it all depends on how many decorators might be applied in the worst case; in mine, so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with Decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest a more elegant solution, that would be welcome. EDIT: It occurs to me that using the proposed ControlledGraph class with the Strategy pattern may help here: some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
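
    On the "wrapping every method" complaint, one common mitigation (a sketch under assumptions: the GraphStructure methods shown here are invented for illustration) is a single abstract forwarding decorator, so each limitation class only overrides the methods it actually constrains:

        // Hedged sketch; addVertex/addEdge are hypothetical members of GraphStructure.
        abstract class GraphDecorator implements GraphStructure {
            protected final GraphStructure inner;
            protected GraphDecorator(GraphStructure inner) { this.inner = inner; }
            public void addVertex(Object v) { inner.addVertex(v); }
            public void addEdge(Object from, Object to) { inner.addEdge(from, to); }
        }

        class NonNullGraph extends GraphDecorator {
            NonNullGraph(GraphStructure inner) { super(inner); }
            @Override public void addVertex(Object v) {
                if (v == null) throw new IllegalArgumentException("null vertex");
                super.addVertex(v);
            }
        }

    Stacking then reads as: new NonNullGraph(new AdjacencyListGraph()), with further wrappers added the same way.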

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • django: can't adapt error when importing data from postgres database

    - by Oleg Tarasenko
    Hi, I'm getting a strange error when installing a fixture from dumped data. I am using psycopg2 and Django 1.1.1.

        silver:probsbox oleg$ python manage.py loaddata /Users/oleg/probs.json
        Installing json fixture '/Users/oleg/probs' from '/Users/oleg/probs'.
        Problem installing fixture '/Users/oleg/probs.json': Traceback (most recent call last):
          File "/opt/local/lib/python2.5/site-packages/django/core/management/commands/loaddata.py", line 153, in handle
            obj.save()
          File "/opt/local/lib/python2.5/site-packages/django/core/serializers/base.py", line 163, in save
            models.Model.save_base(self.object, raw=True)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/base.py", line 495, in save_base
            result = manager._insert(values, return_id=update_pk)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/manager.py", line 177, in _insert
            return insert_query(self.model, values, **kwargs)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/query.py", line 1087, in insert_query
            return query.execute_sql(return_id)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql
            cursor = super(InsertQuery, self).execute_sql(None)
          File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql
            cursor.execute(sql, params)
          File "/opt/local/lib/python2.5/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
        ProgrammingError: can't adapt

    First I checked for similar issues on the internet. This one seemed very related, as my data has many non-ASCII symbols: http://code.djangoproject.com/ticket/5996. But I've checked my Django installation and it's fine there. Could you advise what is wrong?
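
    psycopg2 raises "can't adapt" when some Python value in params has no registered SQL adapter. One hedged way to narrow it down (a debugging sketch, not a confirmed diagnosis) is to try adapting each suspect value directly and see which one fails:

        # Hedged sketch, Python 2.5 era: suspect_params is a hypothetical
        # list of the values from the fixture row being inserted.
        import psycopg2
        import psycopg2.extensions as ext

        for value in suspect_params:
            try:
                ext.adapt(value)
            except psycopg2.ProgrammingError:
                print repr(value), type(value)  # the unadaptable value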

  • Why does calling abort() on ajax request cause error in ASP.Net MVC (IE8)

    - by user169867
    I use jQuery to post to an MVC controller action that returns a table of information. The user of the page triggers this by clicking various links. In case the user clicks a bunch of these links in quick succession, I wanted to cancel any previous ajax request that hadn't finished. I've found that when I do this (although it's fine from the client's point of view) I get errors in the web application saying "The parameters dictionary contains a null entry for parameter srtCol of non-nullable type 'System.Int32'". Now, the ajax post definitely passes in all the parameters, and if I don't try to cancel the ajax request it works just fine. But if I do cancel the request, by calling abort() on the XMLHttpRequest object that ajax() returns before it finishes, I get the error from ASP.NET MVC. Example:

        // Cancel any previous request
        if (req) {
            req.abort();
            req = null;
        }

        // Make new request
        req = $.ajax({
            type: 'POST',
            url: "/Myapp/GetTbl",
            data: { srtCol: srt, view: viewID },
            success: OnSuccess,
            error: OnError,
            dataType: "html"
        });

    I've noticed this only happens in IE8; in FF it doesn't seem to cause a problem. Does anyone know how to cancel an ajax request in IE8 without causing errors for MVC? Thanks for any help.
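
    A plausible reading (hedged; consistent with the symptom but not verified here) is that IE8 tears the connection down mid-send on abort(), so the server receives a POST with a truncated body and model binding finds no srtCol. One way to sidestep it is to stop aborting and instead discard stale responses:

        // Hedged sketch: let requests finish; only the latest one updates the UI.
        var requestSeq = 0;
        function loadTable(srt, viewID) {
            var seq = ++requestSeq;
            $.ajax({
                type: 'POST',
                url: '/Myapp/GetTbl',
                data: { srtCol: srt, view: viewID },
                dataType: 'html',
                success: function (html, status) {
                    if (seq === requestSeq) { OnSuccess(html, status); }
                },
                error: OnError
            });
        }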

  • Defining jUnit Test cases Correctly

    - by Epitaph
    I am new to unit testing and wanted to do a practical exercise to get familiar with the JUnit framework. I created a program that implements a String multiplier:

        public String multiply(String number1, String number2)

    In order to test the multiplier method, I created a test class consisting of the following test cases (with all the needed integer parsing, etc.):

        @Test
        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();

            // Test for 2 positive integers
            assertEquals("Result", 5, multiplier.multiply("5", "1"));
            // Test for 1 positive integer and 0
            assertEquals("Result", 0, multiplier.multiply("5", "0"));
            // Test for 1 positive and 1 negative integer
            assertEquals("Result", -1, multiplier.multiply("-1", "1"));
            // Test for 2 negative integers
            assertEquals("Result", 10, multiplier.multiply("-5", "-2"));
            // Test for 1 positive integer and 1 non-number
            assertEquals("Result", /* ? */, multiplier.multiply("x", "1"));
            // Test for 1 positive integer and 1 empty field
            assertEquals("Result", /* ? */, multiplier.multiply("5", ""));
            // Test for 2 empty fields
            assertEquals("Result", /* ? */, multiplier.multiply("", ""));
        }

    In a similar fashion, I can create test cases involving boundary cases (considering the numbers are int values) or even imaginary values. 1) But what should the expected value be for the last 3 test cases above (a special number indicating error?) 2) What additional test cases did I miss? 3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame(), etc.? 4) Is this the RIGHT way to go about developing test cases? How exactly am I benefiting from this exercise? 5) What would be the ideal way to test the multiplier method? I am pretty clueless here. If anyone can help answer these queries I'd greatly appreciate it. Thank you.
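
    On question 1, one design among several (a hedged suggestion, assuming multiply() returns the product as a String per the posted signature, and throws NumberFormatException from its internal Integer.parseInt on bad input): rather than returning a magic error value, assert that the exception is thrown. JUnit 4 supports this directly:

        // Hedged sketch: exception-based handling of invalid input.
        @Test(expected = NumberFormatException.class)
        public void multiplyRejectsNonNumericInput() {
            new Multiplier().multiply("x", "1");
        }

        @Test
        public void multipliesTwoNegatives() {
            assertEquals("10", new Multiplier().multiply("-5", "-2"));
        }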

  • Forking in PHP on Windows

    - by Doug Kavendek
    We are running PHP on a Windows server (a source of many problems indeed, but migrating is not an option currently). There are a few points where a user-initiated action needs to kick off a few things that take a while, and about which the user doesn't need to know whether they succeed or fail, such as sending off an email or making sure some third-party accounts are updated. If I could just fork with pcntl_fork(), this would be very simple, but the PCNTL functions are not available on Windows. It seems the closest I can get is to do something of this nature:

        exec('php-cgi.exe somescript.php');

    However, this would be far more complicated. The actions I need to kick off rely on a lot of context that already exists in the running process; to use the above example, I'd need to figure out the essential data and supply it to the new script in some way. If I could fork, it'd just be a matter of letting the parent process return early, leaving the child to work on a few more things. I've found a few people talking about their own work getting various PCNTL functions compiled on Windows, but none seemed to have anything available (broken links, etc.). Despite another question having practically the same name as mine, it seems that problem was more about execution timeout than needing to fork. So, is my best option just to refactor a bit to deal with calling php-cgi, or are there other options? Edit: It seems exec() won't work for this, at least not without me figuring out some other aspect of it, as it waits until the call returns. I figured I could use START, sort of like:

        exec('start php-cgi.exe somescript.php');

    but it still waits until the other script finishes.
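
    One widely used workaround on Windows (hedged: era folklore rather than an official API, though the functions themselves are standard PHP) is to open the process through popen() with start /B and close the handle immediately, which returns without waiting; context can be handed over by serializing it to a temp file whose path the worker knows how to find:

        <?php
        // Hedged sketch: fire-and-forget a background worker on Windows.
        // somescript.php is the script from the question; it would read its
        // context from a temp file written here beforehand.
        pclose(popen('start /B php-cgi.exe somescript.php', 'r'));
        ?>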

  • SSL with private key on an HSM

    - by Jason
    I have a client-server architecture in my application that uses SSL. Currently, the private key is stored in CAPI's key store. For security reasons, I'd like to store the key in a safer place, ideally a hardware security module (HSM) built for this purpose. Unfortunately, with the private key stored on such a device, I can't figure out how to use it in my application. On the server, I am simply using the SslStream class and the AuthenticateAsServer(...) call. This method takes an X509Certificate object that has its private key loaded, but since the private key is stored in a secure (e.g. non-exportable) location on the HSM, I don't know how to do this. On the client, I am using an HttpWebRequest object and the ClientCertificates property to add my client-authentication certificate, but I have the same problem here: how do I get the private key? I know there are some HSMs that act as SSL accelerators, but I don't really need an accelerator. Also, these products tend to have special integration with web servers such as IIS and Apache, which I'm not using. Any ideas? The only thing I can think of would be to write my own SSL library that would allow me to hand off the signing portion of the handshake to the HSM, but this seems like a huge amount of work.
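
    One avenue worth checking (hedged; it depends on the HSM vendor): if the HSM exposes a Windows CAPI/CNG provider, the private key never needs to be exportable. The certificate in the Windows store merely references a key container on the device, and SslStream hands the private-key operations off to the provider transparently:

        // Hedged sketch: load a cert whose key lives in an HSM-backed CSP.
        // "myserver" is a hypothetical subject name.
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindBySubjectName, "myserver", false)[0];
        sslStream.AuthenticateAsServer(cert); // signing runs inside the provider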

  • Templates vs. coded HTML

    - by Alan Harris-Reid
    I have a web app consisting of some HTML forms for maintaining some tables (SQLite, with CherryPy for the web-server stuff). First I did it entirely 'the Python way', and generated HTML strings via code, with common headers, footers, etc. defined as functions in a separate module. I also like the idea of templates, so I tried Jinja2, which I find quite developer-friendly. In the beginning I thought templates were the way to go, but that was when the pages were simple. Once .css and .js files were introduced (not necessarily in the same folder as the .html files), along with an ever-increasing number of {{...}} variables and {%...%} commands, things started getting messy at design time, even though they looked great at run time. Things got even more difficult when I needed additional JavaScript in the <head> or <body> sections. As far as I can see, the main advantages of using templates are:
    - Non-dynamic elements of a page can easily be viewed in a browser during design.
    - Except for {} placeholders, HTML is kept separate from Python code.
    - If your company has a web-page designer, they can still design without knowing Python.
    while some disadvantages are:
    - {{}} delimiters are visible when viewed at design time in a browser.
    - Associated .css and .js files have to be in the same folder to see their effects in the browser at design time.
    - Data, variables, lists, etc. must be prepared in advance and either declared globally or passed as parameters to the render() function.
    So: when should one use 'hard-coded' HTML, and when templates? I am not sure of the best way to go, so I would be interested to hear other developers' views. TIA, Alan

  • How to route all subdomains to a single host using mDNS?

    - by John Mee
    I have a development webserver hosting as "myhost.local", which is found using Bonjour/mDNS; the server is running avahi-daemon. The webserver also wants to handle any subdomains of itself, e.g. "cat.myhost.local", "dog.myhost.local" and "guppy.myhost.local". Given that myhost.local is on a dynamic IP address from DHCP, is there still a way to route all requests for the subdomains to myhost.local? I'm starting to think it is not currently possible. From http://marc.info/?l=freedesktop-avahi&m=119561596630960&w=2 :

        > You can do this with the /etc/avahi/hosts file. Alternatively you
        > can use avahi-publish-host-name.
        No, he cannot. Since he wants to define an alias, not a new hostname.
        I.e. he only wants to register an A RR, no reverse PTR RR. But if you
        stick something into /etc/avahi/hosts then it registers both, and
        detects a collision if the PTR RR is non-unique, which would be the
        case for an alias.

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no problems. I'm now trying to get this working in Azure, and I can't for the life of me get it to work. This is my latest error:

        Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D
        ---> Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException:
        Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown.
          at ThrowIfFailed(UInt32 hr)
          at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags)
          at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options)

    Any idea what is causing this? I'm running both the worker role and storage in Azure, so it's nothing to do with the dev-simulation environment disconnect. This is my code to mount the snapshot:

        CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size);

        var container = _blobStorage.GetContainerReference(containerName);
        var blob = container.GetPageBlobReference(driveName);
        CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri);

        string snapshotUri;
        try
        {
            snapshotUri = cloudDrive.Snapshot().AbsoluteUri;
            Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri);
        }
        catch (Exception ex)
        {
            throw new InvalidCloudDriveException(string.Format(
                "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.",
                cloudDrive.Uri.AbsoluteUri), ex);
        }

        cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri);
        Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive);

        string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None);

    The .Mount() method at the end is what's now failing. Please help, as this has me royally stumped! Thanks in advance. Dave

  • Why can't Doctrine retrieve my model data?

    - by scottm
    So, I'm trying to use Doctrine to retrieve some data. I have some basic code like this:

        $conn = Doctrine_Manager::connection(CONNECTION_STRING);
        $site = Doctrine_Core::getTable('Site')->find('00024');
        echo $site->SiteName;

    However, this keeps throwing a SQL error that 'column siteid does not exist'. When I look at the exception, the SQL query is this (you can see the error: the inner_tbl alias for siteid is set to s__siteid, so querying inner_tbl.siteid is what's broken):

        SELECT TOP 1 [inner_tbl].[siteid] AS [s__siteid] FROM (
            SELECT TOP 1
                [s].[siteid] AS [s__siteid],
                [s].[name] AS [s__name],
                [s].[address] AS [s__address],
                [s].[city] AS [s__city],
                [s].[zip] AS [s__zip],
                [s].[state] AS [s__state],
                [s].[region] AS [s__region],
                [s].[callprocessor] AS [s__callprocessor],
                [s].[active] AS [s__active],
                [s].[dateadded] AS [s__dateadded]
            FROM [Sites] [s]
            WHERE ([s].[siteid] = '00024')
        ) AS [inner_tbl]

    Why is the query being generated this way? Could it be the way the YAML schema is laid out?

        Site:
          connection: 0
          tableName: Sites
          columns:
            siteid:
              type: string(5)
              fixed: true
              unsigned: false
              primary: true
              autoincrement: false
            name:
              type: string(300)
              fixed: false
              unsigned: false
              notnull: true
              primary: false
              autoincrement: false
            address:
              type: string(100)
              fixed: false
              unsigned: false
              notnull: false
              primary: false
              autoincrement: false
            city:
              type: string(100)
              fixed: false
              unsigned: false
              notnull: false
              primary: false
              autoincrement: false
            zip:
              type: string(5)
              fixed: false
              unsigned: false
              notnull: false
              primary: false
              autoincrement: false
            state:
              type: string(2)
              fixed: true
              unsigned: false
              notnull: true
              primary: false
              autoincrement: false
            region:
              type: integer(4)
              fixed: false
              unsigned: false
              notnull: true
              default: (5)
              primary: false
              autoincrement: false
            callprocessor:
              type: integer(4)
              fixed: false
              unsigned: false
              notnull: true
              primary: false
              autoincrement: false
            active:
              type: integer(1)
              fixed: false
              unsigned: false
              notnull: true
              primary: false
              autoincrement: false
            dateadded:
              type: timestamp(16)
              fixed: false
              unsigned: false
              notnull: true
              default: (getdate())
              primary: false
              autoincrement: false
