Search Results

Search found 31269 results on 1251 pages for 'process management'.

Page 577/1251

  • Magento - Zend - backend error

    - by user325659
    I get the following error when I am logged into the Magento backend:

        Fatal error: Interface 'Zend_Http_Client_Adapter_Interface' not found in /homepages/45/d210005774/htdocs/websitename/lib/Varien/Http/Adapter/Curl.php on line 176

    I also got this error previously in the Index Management section in Magento:

        Fatal error: Call to undefined method Zend_Locale_Data::disableCache() in /homepages/45/d210005774/htdocs/websitename/lib/Zend/Locale/Format.php on line 153

    Could anyone help me out with this? I think the problem has to do with the Zend Framework, but I am not sure what is causing it.

  • Are there any good books to learn C++ if you already know Java and C#?

    - by JF LR
    Hi, I would like to know if you have any good books that teach C++ programming without repeating basic stuff. In fact, I already know Java and C# well. I also have basic knowledge of C and assembly, so I understand a little bit of pointer arithmetic, manual memory management and heap-based allocation. I was looking at O'Reilly's C++ in a Nutshell and was wondering if this book would be a good choice. Thank you

  • Software Architecture: Unit of Work design pattern discussion

    - by santiagobasulto
    Hey everybody. According to Martin Fowler's Unit of Work description: "Maintains a list of objects that are affected by a business transaction and coordinates the writing out of changes and resolution of concurrency problems." The point is to avoid lots of very small calls to the database, which end up being very slow. I'm wondering: if we limit it to database transaction management, won't prepared statements help with this?
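
    For discussion, here is a minimal Python sketch of the transaction-batching aspect only; it uses sqlite3 purely for illustration and is not Fowler's full object-tracking pattern:

        import sqlite3

        class UnitOfWork:
            """Collect pending changes and flush them in a single transaction."""
            def __init__(self, conn):
                self.conn = conn
                self.pending = []            # list of (sql, params) to run at commit

            def register(self, sql, params=()):
                self.pending.append((sql, params))

            def commit(self):
                # One transaction instead of many tiny round trips.
                with self.conn:              # sqlite3 connection commits on success
                    for sql, params in self.pending:
                        self.conn.execute(sql, params)
                self.pending = []

        # Usage sketch
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE person (name TEXT)")
        uow = UnitOfWork(conn)
        uow.register("INSERT INTO person (name) VALUES (?)", ("Ada",))
        uow.register("INSERT INTO person (name) VALUES (?)", ("Grace",))
        uow.commit()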

  • How can I programmatically drop a SQL Server database from .NET code

    - by Craig Shearer
    I'm trying to drop a SQL Server database from .NET code. I've tried using the SMO classes but get an exception saying the database is in use. Then I tried executing a query (opening a SqlConnection, executing a SqlCommand), along the lines of:

        ALTER DATABASE foo SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        -- (pause)
        DROP DATABASE foo

    But still I get an exception saying the database is in use. How do I do this? (Or, how does SQL Server Management Studio implement "Drop database and close existing connections"?)

  • LotusNotes as a data source for a .NET C# site

    - by AlexFreitas
    Is it possible to have a connection to LotusNotes and use it as a data source for a C# project? We use LN for email/calendar :(... and management wants a web page that would interact with the calendar. I think this can all be done within Notes, but I would much rather do it in .NET. They also want some very specific functionality, some of which I'm not really sure can even be done in Notes. Cheers, -Alex

  • Piping EOF problems with stdio and C++/Python

    - by yeus
    I have some problems with EOF and stdio. I have no idea what I am doing wrong. When I see an EOF in my program I clear stdin and, on the next round, I try to read in a new line. The problem is: for some reason the getline function immediately (from the second run onwards; the first works just as intended) returns an EOF instead of waiting for new input from my Python process... Any idea? Alright, here is the code:

        #include <string>
        #include <iostream>
        #include <iomanip>
        #include <limits>
        using namespace std;

        int main(int argc, char **argv)
        {
            for (;;) {
                string buf;
                if (getline(cin, buf)) {
                    if (buf == "q") break;
                    /*****/
                    // do some stuff with input
                    // my actual filter program
                    cout << buf;
                    /*****/
                } else {
                    if ((cin.rdstate() & istream::eofbit)  != 0) cout << "eofbit"  << endl;
                    if ((cin.rdstate() & istream::failbit) != 0) cout << "failbit" << endl;
                    if ((cin.rdstate() & istream::badbit)  != 0) cout << "badbit"  << endl;
                    if ((cin.rdstate() & istream::goodbit) != 0) cout << "goodbit" << endl;
                    cin.clear();
                    cin.ignore(numeric_limits<streamsize>::max());
                    // break;  // I am not using break, because I want more input
                    //         when the parent process puts data into stdin
                }
            }
            return 0;
        }

    And in Python:

        from subprocess import Popen, PIPE
        import os
        from time import sleep

        proc = Popen(os.getcwd() + "/Pipingtest", stdout=PIPE, stdin=PIPE, stderr=PIPE)
        while(1):
            sleep(0.5)
            print proc.communicate("1 1 1")
            print "running"
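
    For reference, subprocess.communicate() waits for the child process to terminate, so it is only meant to be called once per process; below is a minimal, self-contained sketch of line-by-line piping with Popen. The echoing child here is purely illustrative and is not the questioner's filter:

        import sys
        from subprocess import Popen, PIPE

        # Illustrative child: echoes each input line back and flushes after each one.
        child_code = (
            "import sys\n"
            "for line in sys.stdin:\n"
            "    sys.stdout.write('echo: ' + line)\n"
            "    sys.stdout.flush()\n"
        )
        proc = Popen([sys.executable, "-u", "-c", child_code], stdin=PIPE, stdout=PIPE)

        for msg in ["1 1 1", "2 2 2"]:
            proc.stdin.write(msg + "\n")            # send one line...
            proc.stdin.flush()
            print proc.stdout.readline().rstrip()   # ...and read one reply

        proc.stdin.close()                          # child sees EOF on stdin and exits
        proc.wait()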

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import pytest

        class testsomething:
            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here
            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here
            def test1(self):
                # Do some test1 related stuff
            def test2(self):
                # Do some test2 related stuff
            ...
            def test40(self):
                # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within each testsuite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among the 4 workers. This leads to the 'setup_class' method being executed every time for each test within a 'testsuitex.py' module, which defeats my purpose. I want setup_class to be executed only once per class and the tests executed serially after that. Essentially what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the 'pytest-xdist' framework? The only option I can think of is to kick off a separate process to execute each test suite individually from runner.py, along these lines:

        def test_execute_func(testsuite_path):
            subprocess.process('py.test %s' % testsuite_path)

        if __name__ == '__main__':
            # Gather all the testsuite names
            for each testsuite:
                multiprocessing.Process(test_execute_func, (testsuite_path,))
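
    For what it's worth, a runnable version of that workaround could look something like the sketch below; the glob pattern and the py.test invocation are assumptions based on the layout above:

        import glob
        import subprocess
        from multiprocessing import Process

        def run_suite(testsuite_path):
            # Each suite gets its own py.test process, so its tests run serially
            # inside that process while the suites themselves run in parallel.
            subprocess.call(['py.test', testsuite_path])

        if __name__ == '__main__':
            suites = glob.glob('tests/*/testsuite*.py')
            workers = [Process(target=run_suite, args=(path,)) for path in suites]
            for w in workers:
                w.start()
            for w in workers:
                w.join()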

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to tips on structuring the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines. 4 of them are basically the Hadoop cluster with the NameNodes, YARN etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear on, that would be great. I was thinking about the best setup for a small cluster, so I decided to expose only the manager machine and probably use it to submit all the jobs. The other machines will see each other etc., but will not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it; if anyone could point me in the right direction, that would be great. Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the client machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar? So to sum it up, my questions are:

        1. Is this an OK setup for a small test cluster?
        2. How can I use one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
        3. How do I set up a client machine to submit jobs to the remote cluster (ideally with an example of how to do it on Windows)? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.

  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be to import data from an Excel file on a weekly or monthly basis. At first, I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand, or to use the services of an SSIS package? The basic process will be as follows:

        1. A separate process downloads an .xls file to a local file share. The file will have a name like 'myfilename MON YY'.
        2. Read the month and year from the filename, reformat it to a SQL date, and then query a DimDate table to find the corresponding date key.
        3. For each row (after the first 2 header rows), insert the data with the date key, unless the row is a total row, in which case ignore it.

    Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat-file data source, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this:

        select DateKey from DimDate
        where ActualDate = convert(datetime, '01-' + 'JAN-10', 120)

    but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?

  • How to upload files to server using JSP/Servlet?

    - by Thang Pham
    How can I upload files to a server using JSP/Servlet? I tried this:

        <form action="upload" method="post">
            <input type="text" name="description" />
            <input type="file" name="file" />
            <input type="submit" />
        </form>

    However, I only get the file name, not the file content. When I add enctype="multipart/form-data" to the <form>, then request.getParameter() returns null. During research I stumbled upon Apache Commons FileUpload. I tried this:

        FileItemFactory factory = new DiskFileItemFactory();
        ServletFileUpload upload = new ServletFileUpload(factory);
        List items = upload.parseRequest(request); // This line is where it died.

    Unfortunately, the servlet threw an exception without a clear message and cause. Here is the stacktrace:

        SEVERE: Servlet.service() for servlet UploadServlet threw exception
        javax.servlet.ServletException: Servlet execution threw an exception
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Thread.java:637)

  • Are the Django ORM and templates thread-safe?

    - by Piotr Czapla
    I'm using the Django ORM and templates to create a background service that is run as a management command. Do you know if Django is thread-safe? I'd like to use threads to speed up processing. The processing is bound by I/O, not CPU, so I don't care about the performance hit caused by the GIL.
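
    For context, this is roughly the kind of management command being described, sketched with plain threads; the app name, model and per-item work are placeholders, not details from the question:

        import threading

        from django.core.management.base import BaseCommand
        from myapp.models import Item           # placeholder app/model

        def process_items(items):
            for item in items:
                # I/O-bound work per item (HTTP calls, file writes, ...) goes here
                pass

        class Command(BaseCommand):
            help = "Background processing with worker threads (illustrative only)."

            def handle(self, *args, **options):
                items = list(Item.objects.all())
                n_threads = 4
                chunks = [items[i::n_threads] for i in range(n_threads)]
                threads = [threading.Thread(target=process_items, args=(chunk,))
                           for chunk in chunks]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()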

  • gen_server with a dict vs mnesia table vs ets

    - by pablo
    Hi, I'm building an Erlang server. Users send HTTP requests to the server to update their status. The HTTP request process on the server saves the user's status message in memory. Every minute the server sends all messages to a remote server and clears the memory. If a user updates his status several times in a minute, the last message overrides the previous one. It is important that, between reading all the messages and clearing them, no other process is able to write a status message. What is the best way to implement this?

        1. A gen_server with a dict. The key will be the user id; dict:store/3 will update or create the status. The gen_server solves the 'transaction' issue.
        2. An mnesia table with ram_copies. Handles transactions, and I don't need to implement a gen_server. Is there too much overhead with this solution?
        3. An ETS table, which is more lightweight, plus a gen_server. Is it possible to do the transaction in ETS, i.e. to lock the table between reading all the messages and clearing them?

    Thanks

  • Visual Studio Unit Test failure to start

    - by swmi
    Hi, I am having an issue when starting tests under debug mode in Visual Studio 2008 Team Test, where it gives the following error: "Failed to queue test run '{user@machinename}': Object reference not set to an instance of an object." I googled for the error but no joy. I don't even understand what it means, as it is too brief. Has anyone come across this? Note that I can run tests fine if I am not debugging, and I get the same error irrespective of the test I run. Thank you, Swati

    ETA: Being new to Visual Studio Team Test, I didn't know there was a better exception log than what I was seeing. Anyhow, here it is:

        <Exception>
        System.NullReferenceException: Object reference not set to an instance of an object.
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.ShowToolWindow[T](T& toolWindow, String errorMessage, Boolean show)
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.OpenTestResultsToolWindow()
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.SolutionIntegrationManager.DebugTarget(DebugInfo debugInfo, Boolean prepareEnvironment)
            at Microsoft.VisualStudio.TestTools.TestManagement.DebugProcessLauncher.Launch(String exeFileName, String args, String workingDir, EventHandler processExitedHandler, Process& process)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.StartProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.RestartProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.PrepareProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.InitializeController(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.QueueTestRunWorker(Object state)
        </Exception>

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The Story: After cleaning up my Dreamhost shared server's home folder of all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The Script: I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs: So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc. setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things I think it's missing, and I am not quite sure how to go about them, what the best way to do them is, or whether they make any sense at all:

        - Check for errors and break
        - Check for minor version bumps of the packages and give warnings
        - Check for known dependencies
        - Use arguments to install only some of the packages instead of commenting out lines
        - Organise the code in a manner that's easy to update
        - Optionally make the installers and compiling silent, with error logging to file
        - Fail-proof .bashrc modification to prevent breaking ssh logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)

    The Gist: I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.

  • What does the VC++ compiler/linker do when building a C++ project with Managed Extensions?

    - by ???
    The initial problem is that I rebuilt a C++ project with debug symbols and copied it to a test machine. The output of the project is an external COM server (an .exe file). When calling the COM interface function, there's an RPC call failure: COMException(0x800706BE): The remote procedure call failed. According to the COM HRESULT design, if the facility code is 7 it's actually a Win32 error, and the Win32 error code is 0x6BE, which is the above-mentioned "remote procedure call failed". All I do is replace the COM server .exe file; the original file works well. When I looked into the project, I found it's a C++ project with Managed Extensions. When checking the DLL with Reflector, it shows there are 2 additional .NET assembly references. Then I checked the project settings and found nothing about the extra 2 assembly references. I turned on the compiler's "show includes" option and the linker's verbose library option, and tried to analyze whether the assemblies are indirectly referenced via a .h file. I collected all the .h files and grepped all the files for '#using', '#import' and the assembly file names themselves. There really is a '#using' in one of the .h files, but it is not relevant to the referenced assemblies. As for the linked .lib library files, only one of them is a side product of another Managed-Extensions-enabled C++ project; all the others are produced by pure, traditional C++ projects. For that Managed-Extensions-enabled C++ project, I checked the output DLL assembly: it did NOT reference the 2 assemblies. I even tried to capture access to the additional assembly files via Sysinternals' Filemon and Procmon, but the rebuild process does NOT access these files. I'm very confused about the compile and link process model of a VC++/CLI project: where did the additional assembly references slip into the final assembly? Thanks in advance for any of your help.

  • Bootstrap - Typeahead on multiple inputs

    - by Clem
    I have two text inputs, and both have to run autocompletion. The site is using Bootstrap and its « typeahead » component. I have this HTML:

        <input type="text" class="js_typeahead" data-role="artist" />
        <input type="text" class="js_typeahead" data-role="location" />

    I'm using the « data-role » attribute (which is sent to the Ajax controller as a $_POST index) in order to determine what kind of data has to be retrieved from the database. The JavaScript goes this way:

        var myTypeahead = $('input.js_typeahead').typeahead({
            source: function(query, process) {
                var data_role;
                data_role = myTypeahead.attr('data-role');
                return $.post('/ajax/typeahead', { query: query, data_role: data_role }, function(data) {
                    return process(data.options);
                });
            }
        });

    With PHP, I check what $_POST['data-role'] contains and run the MySQL query (in this case, a query on either a list of Artists or a list of Locations). But the problem is that the second typeahead returns the same values as the first one (the list of Artists). I assume it's because the listener is attached to the object « myTypeahead », and this way the "data-role" attribute that is used will always be the same. I think I could fix it by using something like:

        data_role = $(this).attr('data-role');

    But of course this doesn't work, as it's a different scope. Maybe I'm doing it all wrong, but at least maybe you people could give me a hint. Sorry if this has already been discussed; I searched but without success. Thanks in advance, Clem (from France, sorry for my English)

  • Lightweight CMS in PHP

    - by David
    Hi, I am building a site which will require some very limited content management for a client. There are only a few areas of the site where the client will need to be able to update the content themselves. Would it be better to create a very simple custom admin page for the client to log in and, say, add a news story etc., or would it be best to use a fully fledged CMS like Drupal, which seems overkill to me?

  • Is it possible to make a subclass of NSObject which supports subnodes in IB for an iPhone project?

    - by Eonil
    I'm making a custom UI element class for iPhone. It would be cool to edit my class in Interface Builder with a hierarchy. Some of my classes are management classes like UINavigationController, but they are not subclasses of it; they subclass NSObject. Of course, I can place an NSObject instance in IB, but it cannot have a child node. Is there a way to enable adding child nodes to a subclass of NSObject?

  • Determine the folder of a SAS source file

    - by exhuma
    When I open a SAS file in Enterprise Guide and run it, it is executed on the server. The source file itself is located either on the production site or the development site; in both cases, however, it is executed on the same server. I want to be able to tell my script to store results in a relative folder. But if I write something like

        libname lib_out xport "..\tmp\foobar.xpt";

    I get an error, because the working folder of the SAS Enterprise Guide process is not the location of my source file, but a folder on the server. And the folder ..\tmp does not exist there. Even if it did, the server process does not have write permission in that folder. I would like to determine from which folder the .sas file was loaded and set the working folder accordingly. In one case it's S:\Development\myproject\sas\foobar.sas and in the other case it's S:\Production\myproject\sas\foobar.sas. Is this possible at all? Or how would you do this?

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin = 1;
            const uint64_t umax = 10000000000LL;
            double sum = 0.;
            #pragma omp parallel for reduction(+:sum)
            for (uint64_t u = umin; u < umax; u++)
                sum += 1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that it makes the difference between a 19-20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

  • How to upgrade Compact Framework applications?

    - by Derick Bailey
    I'm looking for a way to manage application upgrades for my Compact Framework app. Let's say I have v1 of the app installed on my device, and v1.1 has been released. I want the app to make a call to my server to see if there is a new version. If a new version is found, I want to send the new version of the app down to the device and have it installed, replacing the old version. My first thought was just to have the app download the .cab file and kick off the cab file just before exiting the app. This would mostly get the job done, but it would prompt the user to pick the installation location if they have a storage card or other partitions on their device. I would like to prevent any user input and just have the new version of the app installed, replacing the old app. I'm certain that there are others doing this already and I don't want to reinvent the wheel here. What application management tools and systems exist for this type of process? How can I facilitate this type of process?

  • new operator in DllMain of MFC Extension Dll

    - by Picaro De Vosio
    Hi, the DLL best practices document from Microsoft (available here) recommends avoiding the use of memory management functions from the dynamic C run-time (CRT) within DllMain. But the DllMain function of an MFC extension DLL dynamically allocates memory for a CDynLinkLibrary in the code snippet available on MSDN: http://msdn.microsoft.com/en-us/library/1btd5ea3%28v=VS.80%29.aspx. Is this a violation of DLL best practices, or is it OK to use in an MFC extension DLL? Thanks
