Search Results

Search found 18841 results on 754 pages for 'path finding'.

Page 576/754

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable: while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be: readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

    The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building.

    Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for, or re-create, what already exists.

    Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
    We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information would bring significant improvements.

    As you evaluate how content and process flow through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result, over the years, is disparate applications with separate information repositories, and in many cases these contain duplicate information or, worse, slightly different versions of the same information.

    This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective and more cost-efficient, and to manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three, and that you will find them informative. Click here to learn more about these sessions and to register for them.

    There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • what is the best setting for using lighttpd on 8G ram?

    - by user299415
    I am running a system with 8 GB of RAM and 8 x Xeon 3361! What are the best settings for handling simultaneous connections, and what is the maximum? Are settings like these correct? server.max-keep-alive-requests = 0 server.max-keep-alive-idle = 10 server.max-read-idle = 60 server.max-write-idle = 60 server.event-handler = "linux-sysepoll" server.max-fds = 2048 fastcgi.server = ( ".php" = ( "localhost" = ( "socket" = "/tmp/php-fastcgi.socket", "bin-path" = "/usr/bin/php-cgi", "max-procs" = 20, "bin-environment" = ( "PHP_FCGI_CHILDREN" = "40", "PHP_FCGI_MAX_REQUESTS" = "800" ), "broken-scriptfilename" = "enable" ) ) ) please help me!

    Read the article

  • simple EJB jar deployed in jboss with its own log4j configuration

    - by user309281
    Hi All, I have a simple EJB jar with a stateless session bean, deployed in JBoss AS 4.2.2 under /server/default/deploy. The bean is registered under the JNDI tree as viewed from the JBoss JMX console, and I am able to access it through a remote Java client outside JBoss. Inside the EJB jar, I have added some logging to be written to a separate log file, using the Apache log4j jar and a log4j.xml. But I am not able to view any of the logs. Also, I do not wish to use jboss-log4j.xml, since there will be many other EJBs to be deployed and I wish to have a separate log4j configuration for each EJB application. Here is one of my EJB jar's contents: EJB_DS.jar: log4j.xml, classes. The Apache log4j jar is added to the /server/default/lib path. Kindly highlight if I have missed any points needed to enable the log4j configuration. With Regards, Krishna
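
    As a starting point, a minimal per-application log4j.xml sketch is shown below. The appender name, log file name and the com.mycompany package are placeholders, and note that with log4j.jar sitting in server/default/lib, JBoss may still route logging through its own configuration unless the deployment's class loading is scoped.

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
      <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
        <!-- File appender writing this EJB's log next to the server logs -->
        <appender name="EJB_DS_FILE" class="org.apache.log4j.FileAppender">
          <param name="File" value="${jboss.server.log.dir}/ejb_ds.log"/>
          <param name="Append" value="true"/>
          <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
          </layout>
        </appender>
        <!-- Only this bean's package goes to the separate file -->
        <category name="com.mycompany.ejbds">
          <priority value="DEBUG"/>
          <appender-ref ref="EJB_DS_FILE"/>
        </category>
      </log4j:configuration>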

    Read the article

  • How to Run NUnit Tests from C# Code

    - by Dror Helper
    I'm trying to write a simple method that receives a file and runs it using NUnit. The code I managed to build using NUnit's source does not work: if(openFileDialog1.ShowDialog() != DialogResult.OK) { return; } var builder = new TestSuiteBuilder(); var testPackage = new TestPackage(openFileDialog1.FileName); var directoryName = Path.GetDirectoryName(openFileDialog1.FileName); testPackage.BasePath = directoryName; var suite = builder.Build(testPackage); TestResult result = suite.Run(new NullListener(), TestFilter.Empty); The problem is that I keep getting an exception thrown by builder.Build stating that the assembly was not found. What am I missing? Is there some other way to run the test from the code (without using Process.Start)?
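
    For what it's worth, a rough sketch follows, assuming the NUnit 2.5-era NUnit.Core API: the usual missing step is initializing NUnit's host services before building the suite so that its assembly resolution is in place. Treat the exact calls as assumptions rather than confirmed API.

      using System.IO;
      using NUnit.Core;

      // Initialize NUnit's internal services (addins, assembly resolution) first.
      CoreExtensions.Host.InitializeService();

      var testPackage = new TestPackage(openFileDialog1.FileName);
      testPackage.BasePath = Path.GetDirectoryName(openFileDialog1.FileName);

      // Same builder/run calls as in the question, now with services initialized.
      TestSuite suite = new TestSuiteBuilder().Build(testPackage);
      TestResult result = suite.Run(new NullListener(), TestFilter.Empty);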

    Read the article

  • add text to curved image

    - by miki123
    $config['source_image'] = '/path/to/image/mypic.jpg'; $config['wm_text'] = 'Copyright 2006 - John Doe'; $config['wm_type'] = 'text'; $config['wm_font_path'] = './system/fonts/texb.ttf'; $config['wm_font_size'] = '16'; $config['wm_font_color'] = 'ffffff'; $config['wm_vrt_alignment'] = 'bottom'; $config['wm_hor_alignment'] = 'center'; $config['wm_padding'] = '20'; $this->image_lib->initialize($config); $this->image_lib->watermark(); This is watermark code in PHP, and it is working fine. But when we add text to a curved image, like a mug image, the lettering does not overlap the curved image. How can we overcome this?

    Read the article

  • How to set the bounce address using System.Net.Mail?

    - by Anthony
    I'm trying to implement the Variable Envelope Return Path (VERP) method to manage email addresses (i.e. when an email I send bounces back, I want the bounce to be sent to a specific email address so that I can update my database and avoid sending emails to that address in the future). According to this article it is possible to specify the email address a bounce email is sent to. How do you do this in .NET? For example, say I ([email protected]) want to send an email to you ([email protected]). If [email protected] doesn't exist anymore, I want your server to send the bounce email to [email protected]. This way, when I receive this bounced email, I know that [email protected] is not a valid email address anymore and I can update my database accordingly. In this example, the bounce address would be [email protected]. How do you specify it using System.Net.Mail?
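
    For illustration, here is a rough C# sketch. It relies on the commonly reported (but worth verifying) behaviour that SmtpClient uses MailMessage.Sender, when set, as the SMTP envelope sender (MAIL FROM), which is the address bounces are returned to. All addresses below are placeholders.

      using System.Net.Mail;

      var message = new MailMessage();
      message.From = new MailAddress("me@mydomain.com");    // what the recipient sees
      // VERP-style envelope sender; bounces should come back to this mailbox
      message.Sender = new MailAddress("bounce-you=yourdomain.com@mydomain.com");
      message.To.Add(new MailAddress("you@yourdomain.com"));
      message.Subject = "Hello";
      message.Body = "...";

      new SmtpClient("localhost").Send(message);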

    Read the article

  • In Javascript, by what mechanism does setting an Image src property trigger an image load?

    - by brainjam
    One of the things you learn early on when manipulating a DOM using Javascript is the following pattern: var img = new Image(); // Create new Image object img.onload = function(){ // execute drawImage statements here } img.src = 'myImage.png'; // Set source path As far as I know, in general when you set an object property there are no side effects. So what is the mechanism for triggering an image load? Is it just magic? Or can I use a similar mechanism to implement a class Foo that supports a parallel pattern? var foo = new Foo(); // Create new object foo.barchanged = function(){ // execute something after side effect has completed } foo.bar = 'whatever'; // Assign something to 'bar' property I'm vaguely aware of Javascript getters and setters. Is this how Image.src triggers a load?
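
    Host objects like Image implement src natively, but the same pattern (assignment starts the work, a callback fires when it finishes) can be approximated with an ES5 accessor property. A rough sketch, reusing the hypothetical Foo and barchanged names from the question:

      function Foo() {
        var self = this;
        var _bar;
        Object.defineProperty(this, 'bar', {
          get: function () { return _bar; },
          set: function (value) {
            _bar = value;
            // simulate the asynchronous side effect, then notify
            setTimeout(function () {
              if (typeof self.barchanged === 'function') {
                self.barchanged();
              }
            }, 0);
          }
        });
      }

      var foo = new Foo();
      foo.barchanged = function () { console.log('bar is now ' + foo.bar); };
      foo.bar = 'whatever';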

    Read the article

  • awk / sed script to remove text

    - by radman
    Hi, I am currently in need of a way to programmatically remove some text from Makefiles that I am dealing with. The problem is that (for whatever reason) the makefiles are being generated with link commands of -l<full_path_to_library>/<library_name> when they should be generated with -l<library_name>. So what I need is a script to find all occurrences of -l/ and then remove up to and including the next /. Example of what I'm dealing with: -l/home/user/path/to/boost/lib/boost_filesystem needs to become -lboost_filesystem. As you can imagine, this is a stop-gap measure until I fix the real problem (on the generation side), but in the meantime it would be a great help if this could work, as I am not too good with my awk and sed. Thanks for any help.
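
    For reference, a one-line GNU sed sketch along those lines (the Makefile name is illustrative): the greedy match runs from -l/ up to the last / in the token, so -l/home/user/path/to/boost/lib/boost_filesystem collapses to -lboost_filesystem.

      # edit in place (GNU sed); drop -i to preview the result on stdout first
      sed -i 's|-l/[^ ]*/|-l|g' Makefile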

    Read the article

  • Rails: unexpected behavior updating a shared instance

    - by Pascal Lindelauf
    I have a User object that is related to a Post object via two different association paths:
      Post --(has_many)-- comments --(belongs_to)-- writer (of type User)
      Post --(belongs_to)-- writer (of type User)
    Say the following hold:
      user1.name == "Bill"
      post1.comments[1].writer == user1
      post1.writer == user1
    Now, when I retrieve post1 and its comments from the database and I update post1.comments[1].writer like so: post1.comments[1].writer.name = "John", I would expect post1.writer.name to equal "John" too. But it doesn't! It still equals "Bill". So there seems to be some caching going on, but not the kind I would expect. I would expect Rails to be clever enough to load exactly one instance of the user named "Bill"; instead it appears to load two individual ones, one for each association path. Can someone explain how this works exactly and how I am to handle these types of situations the "Rails way"?
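
    ActiveRecord in this era has no identity map, so each association path materialises its own Ruby object for the same row, and an in-memory edit on one copy is invisible through the other. A rough sketch of the usual workaround, assuming the standard persistence calls, is to save the change and reload the second copy:

      # write the change through one association path...
      post1.comments[1].writer.update_attribute(:name, "John")
      # ...then refresh the other copy from the database
      post1.writer.reload
      post1.writer.name  # => "John"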

    Read the article

  • How do I get started writing a .Net-wrapper around C++ Omniorb-stubs

    - by Superfisi
    Hello there, my job is to access a CORBA server application from .NET 3.5. After evaluating projects like IIOP.NET (undefined state) and products like VisiBroker (expensive), I'd like to do it "by myself" and write a .NET wrapper around the C++ stubs generated by omniidl (the omniORB IDL-to-C++ generator). This means writing some kind of layer of managed code (C++/CLI) around the unmanaged C++ code. My question is: does anyone have experience with this topic? I honestly don't know how to do it the best way. Right now I plan to create a managed class for every unmanaged class; each managed class holds a member pointing to an instance of the unmanaged class, which is not garbage-collected. Is this the right way to do it, or am I on the wrong path? Thanks in advance!
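
    A minimal C++/CLI sketch of that wrapper pattern is below; NativeAccount stands in for an omniORB-generated stub/proxy class, so all names and members are placeholders.

      // Placeholder for an unmanaged omniORB stub/proxy generated by omniidl.
      class NativeAccount {
      public:
          int balance() const { return 42; }
      };

      // Managed wrapper: owns the native object and frees it deterministically.
      public ref class AccountWrapper {
      public:
          AccountWrapper() : m_native(new NativeAccount()) {}
          ~AccountWrapper()  { this->!AccountWrapper(); }              // destructor (Dispose)
          !AccountWrapper()  { delete m_native; m_native = nullptr; }  // finalizer (safety net)
          int GetBalance()   { return m_native->balance(); }           // forward to native code
      private:
          NativeAccount* m_native;   // raw pointer; never touched by the GC
      };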

    Read the article

  • Is there a smart web developer language skill combination?

    - by Cryo
    I'm no newbie to programming, but I'm making the move to a career in web development, and I've noticed that so many job postings have different combinations of skill requirements: (PHP, C#, XML, XHTML/CSS, ASP, .NET, jQuery, YUI, Joomla, Ruby, Perl, Python, Java, Javascript... the list goes on.) As of now, I've started learning XHTML, CSS, JavaScript, jQuery, PHP, and mySQL, but with so many combinations, I want to plan ahead to have a marketable combination of skills as early on as possible. Am I on the right path? What is vital for a marketable web programmer's arsenal? Thanks for your thoughts.

    Read the article

  • java jdbc connection to mysql problem

    - by fatnjazzy
    Hi, I am trying to connect to MySQL from a Java web application in Eclipse. Connection con = null; try { //DriverManager.registerDriver(new com.mysql.jdbc.Driver()); Class.forName("com.mysql.jdbc.Driver"); con = DriverManager.getConnection("jdbc:mysql://localhost/db_name","root" ,""); if(!con.isClosed()) System.out.println("Successfully connected to " + "MySQL server using TCP/IP..."); } catch(Exception e) { System.err.println("Exception: " + e.getMessage()); } finally { try { if(con != null) con.close(); } catch(SQLException e) { System.out.println(e.toString()); } } I am always getting the exception: com.mysql.jdbc.Driver. I have downloaded this jar (http://forums.mysql.com/read.php?39,218287,220327) and imported it into the "java build path/lib". The MySQL version is 5.1.3. Environment: MySQL 5.1.3 (the DB is up and running queries from PHP), Windows XP, Java EE. Thanks

    Read the article

  • Help translating Reflector deconstruction into compilable code

    - by code poet
    So I am Reflector-ing some framework 2.0 code and end up with the following deconstruction: fixed (void* voidRef3 = ((void*) &_someMember)) { ... } This won't compile, due to 'The right hand side of a fixed statement assignment may not be a cast expression'. I understand that Reflector can only approximate, and generally I can see a clear path, but this is a bit outside my experience. Question: what is Reflector trying to describe to me? Update: I am also seeing the following: fixed (IntPtr* ptrRef3 = ((IntPtr*) &this._someMember)) Update: So, as Mitch says, it is not a bitwise operator, but an address-of operator. The question is now: fixed (IntPtr* ptrRef3 = &_someMember) fails with a 'Cannot implicitly convert type 'xxx*' to 'System.IntPtr*'. An explicit conversion exists (are you missing a cast?)' compilation error. So I seem to be damned if I do and damned if I don't. Any ideas?
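
    For what it's worth, a compilable shape for that decompiled code is sketched below: the fixed declaration uses the field's own type, and the casts Reflector inlined are applied inside the block. MyStruct and _someMember are placeholders for the real member and its type.

      unsafe
      {
          fixed (MyStruct* memberPtr = &_someMember)   // pin using the field's declared type
          {
              void* voidRef3 = (void*)memberPtr;       // the cast Reflector folded into the fixed statement
              IntPtr* ptrRef3 = (IntPtr*)memberPtr;    // same idea for the IntPtr* variant
              // ... original body of the fixed block ...
          }
      }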

    Read the article

  • Error: Unable to access jarfile Click-The-Block.jar

    - by AqueousSnake
    I have made a simple game that I want to convert into a runnable jar so I can show others and launch it without Eclipse. In Eclipse I:
      - right-clicked on the project
      - chose Export > Java > Executable Jar File
      - picked Launch Configuration: CTB (1) - Click The Block
    It made a jar with a MANIFEST.MF containing:
      Manifest-Version: 1.0
      Class-Path: .
      Main-Class: uk.co.robertmerriman.ctb.main.CTB
    This was all extracted to my desktop in Click-The-Block.jar. When I double-click, nothing happens. When I type "java -jar Click-The-Block.jar" into CMD, I get the following error: Error: Unable to access jarfile Click-The-Block.jar.
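
    That particular error usually just means the JVM cannot see a file with that name in the current working directory. A quick sanity check from cmd, with the desktop path assumed:

      cd /d "%USERPROFILE%\Desktop"
      java -jar Click-The-Block.jar

      rem or point at the jar directly, without changing directory
      java -jar "%USERPROFILE%\Desktop\Click-The-Block.jar"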

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
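
    One direction that may help (a sketch only, reusing SON and tz_lookup_radius from the code above): memoise the timezone lookup on a rounded (lng, lat) key, so records with repeated or nearby coordinates skip the per-record find_one round trip. The rounding granularity is an assumption to tune.

      _tz_cache = {}

      def lookup_tz(timezones, lng, lat):
          key = (round(lng, 2), round(lat, 2))   # coarse cache key; adjust to taste
          if key not in _tz_cache:
              entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                  [lng - tz_lookup_radius, lat - tz_lookup_radius],
                  [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
              _tz_cache[key] = entry['tz'] if entry else None
          return _tz_cache[key]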

    Read the article

  • acquia drupal error after installing via web platform on iis 7.5

    - by Binder
    I just installed Acquia Drupal using the Web Platform Installer. The entire process went smoothly, but when I try to browse the website it says "HTTP Error 404.0 - Not Found. The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."
    Detailed Error Information:
      Module: FastCgiModule
      Notification: ExecuteRequestHandler
      Handler: PHP_via_FastCGI
      Error Code: 0x00000000
      Requested URL: http://localhost:8088/index.php
      Physical Path: C:\inetpub\wwwroot\acquia-drupal\index.php
      Logon Method: Anonymous
      Logon User: Anonymous
    I'm running IIS 7.5 on Windows 7. Please help, I've been stuck on this for 2 days now.

    Read the article

  • How to Include SVG file as <input> background

    - by eknown
    I'm a newbie to the SVG world; I just started experimenting today. I'm trying to create a mobile site where the primary graphics are all scalable, thus supporting all display resolutions. I created an SVG file for my input (currently type="image"), and surprisingly the results are as expected in my code editor (Coda). In testing (mobile Safari, desktop Safari and desktop FF), the input displays a broken-image placeholder (the path is correct because I can right-click to download the file). How do I go about including my SVG file in the (HTML5) document?
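
    One route worth trying (a sketch only, assuming the target mobile browsers render SVG in CSS backgrounds at all, which varied in this era): drop type="image" and reference the SVG from CSS instead. The file path and sizes are placeholders.

      <input type="submit" class="svg-button" value="">

      <style>
        .svg-button {
          border: none;
          width: 10em;
          height: 3em;
          background: url('images/button.svg') no-repeat center center;
          -webkit-background-size: 100% 100%;  /* older WebKit */
          background-size: 100% 100%;          /* scale the vector to the control */
        }
      </style>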

    Read the article

  • Determine branch of origin from bzr blame

    - by Dave Aaron Smith
    I had a complicated change that affected a bunch of files. I don't remember what bazaar branch I wrote that change in. We have a somewhat complicated merge setup, so the branch I'm in now lumps that change in with a lot of other changes. I'd like to do some very similar work so it would be nice to pull the original diff. I feel like I should be able to use bzr blame. I run this command on one of the files bzr blame --long path/to/file and I find one of the pertinent lines and get something like 1107.6.213 dsmith@satie 20091202 | tinyMCE.init({ Can I use that to figure out what branch and revision the original change came from? What do the 6 and 213 stand for?
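
    A few bzr commands that may help dig that out (assuming a reasonably recent bzr; a dotted revno like 1107.6.213 generally identifies a revision that arrived via a merge rather than on the mainline):

      bzr log -r 1107.6.213 --show-ids    # the merged revision itself, with its revision-id
      bzr log -n0 -r 1107                 # mainline rev 1107 together with everything it merged
      bzr diff -c 1107.6.213              # the original change as a standalone diff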

    Read the article

  • RewriteRule being greedy

    - by lardlad
    I have been looking for an answer for a few hours now, so sorry if this was asked a ton of times, I missed it. I basically want to make a rewrite to ignore the first directory. That first dir in the path will be different so I thought I could use a regex. But my regex is matching all the way to the file name: RewriteRule ^([a-z]+)?/(.+)$ $2 [L] this works if I am one level deep: http://test.domain.com/one/index.php I get the actual index page of the root. Which is what I want. but if I were to go deeper: http://test.domain.com/one/two/index.php I get a message saying /index.php was not found. So it seems my regex is not stopping after the last [a-z]. I appreciate any help. This is Apache2 if that matters at all.

    Read the article

  • Which Happens First? Anyone Know Exactly How The Apache Server Will Handle This Request?

    - by user310594
    Hello, To keep things simple, please allow the "assumption" that some code requires the use of a full URL, even though the domain is on the same server, i.e. a simple file path cannot be used. TCP/IP?? Question: If a form action target = "http://this-full-URL.com/postdata" (for example) and that URL is also on the same server, then which happens first? A) Data is sent "out onto the web", and then returns to the same server, or B) Before sending any (possibly sensitive) data, the server (Linux, Apache, PHP), first "discovers" the target address is local, so (clearly) no data is sent over the net? Thank you.

    Read the article

  • Regular expression problem (PHP)

    - by Marcos
    Hello all. I have a little problem with a regular expression that I use in PHP. My code identifies all image tags in my content and adds a link to each image. The code works when I use it dynamically, without any specific image defined. When I try it with an image path, the code does not work. How can I solve this problem? Working code: $content = preg_replace('/(<img .*?src="(.+?)".*?>)/','<a class="nyromodal foto" href="'.$imagem_wordpress.'">\1</a>', $content); Problem code: $content = preg_replace('/(<img .*?src="ttp://mysite.com/files/2010/04/bac-gallery-site-matters-saline-project1.jpg".*?>)/','<a class="nyromodal foto" href="'.$imagem_wordpress.'">\1</a>', $content);
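
    A likely culprit (a sketch, not a confirmed diagnosis): the URL's slashes collide with the / pattern delimiters. Escaping the literal URL with preg_quote() and switching to a delimiter that cannot appear in it sidesteps that; the URL below is illustrative, and $imagem_wordpress is kept from the question.

      $img_src = 'http://mysite.com/files/2010/04/bac-gallery-site-matters-saline-project1.jpg';
      $pattern = '~(<img .*?src="' . preg_quote($img_src, '~') . '".*?>)~';

      $content = preg_replace(
          $pattern,
          '<a class="nyromodal foto" href="' . $imagem_wordpress . '">\1</a>',
          $content
      );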

    Read the article

  • PHP Flatten Array with multiple leaf nodes

    - by tafaju
    What is the best way to flatten an array with multiple leaf nodes so that each full path to a leaf is a distinct entry in the result? array("Object"=>array("Properties"=>array(1, 2))); should yield Object.Properties.1 and Object.Properties.2. I'm able to flatten to Object.Properties.1, but 2 does not get processed with this recursive function: function flattenArray($prefix, $array) { $result = array(); foreach ($array as $key => $value) { if (is_array($value)) $result = array_merge($result, flattenArray($prefix . $key . '.', $value)); else $result[$prefix . $key] = $value; } return $result; } I presume a top-down approach will not work when anticipating multiple leaf nodes, so I either need some type of bottom-up processing or a way to copy the array for each leaf and process it (although that seems completely inefficient).

    Read the article

  • How do I move Zend Framework From Development to Production?

    - by dirtylogic
    I'm just wondering if anyone else has had problems moving the Zend Framework from development to production. I changed my docroot to the public folder, updated my library path, but it's still not working out for me. The IndexController is working just fine, but my ServiceController is giving me an internal server error. ServiceController <?php class ServiceController extends Zend_Controller_Action { public function amfAction() { require_once APPLICATION_PATH . '/models/MyClass.php'; $srv = new Zend_Amf_Server(); $srv->setClass('Model_MyClass', 'MyClass'); echo $srv->handle(); exit; } }

    Read the article

  • How to know start and kill processes within Java code (or C or Python) on *nix

    - by recipriversexclusion
    I need to write a process controller module on Linux that handles tasks, each made up of multiple executables. The input to the controller is an XML file that contains the path to each executable and the list of command-line parameters to be passed to each. I need to implement the following functionality: (1) start each executable as an independent process, and (2) be able to kill any of the created processes independently of the others. In order to do (2), I think I need to capture the pid when I create a process, to issue a system kill command. I tried to get access to the pid in Java but saw no easy way to do it. All my other logic (putting info about the tasks in a DB, etc.) is done in Java, so I'd like to stick with that, but if there are solutions you can suggest in C, C++, or Python I'd appreciate those, too.
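
    On the pid point, a commonly used (but JDK-internal, so treat it as an assumption) trick on Linux is sketched below: the runtime Process implementation is typically java.lang.UNIXProcess, whose private pid field can be read via reflection and then handed to a kill command. Paths and arguments are placeholders for what the XML would supply.

      import java.lang.reflect.Field;

      public class ProcessController {

          // Works on typical Linux JVMs where the Process is a java.lang.UNIXProcess.
          static int getUnixPid(Process process) throws Exception {
              Field pidField = process.getClass().getDeclaredField("pid");
              pidField.setAccessible(true);
              return pidField.getInt(process);
          }

          public static void main(String[] args) throws Exception {
              Process p = new ProcessBuilder("/bin/sleep", "60").start();
              int pid = getUnixPid(p);
              System.out.println("started pid " + pid);

              // Kill just this child, leaving the others alone.
              Runtime.getRuntime().exec(new String[] {"kill", "-9", Integer.toString(pid)}).waitFor();
          }
      }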

    Read the article

  • Python problem with resize animate GIF

    - by gigimon
    Hello! I want to resize an animated GIF while preserving the animation. I have tried PIL and PythonMagickWand (ImageMagick), and with some GIFs I get a bad frame. When I use PIL, a frame gets mangled as the frames are read. For testing, I use this code: from PIL import Image im = Image.open('d:/box_opens_closes.gif') im.seek(im.tell()+1) im.seek(im.tell()+1) im.seek(im.tell()+1) im.show() When I use MagickWand with this code: wand = NewMagickWand() MagickReadImage(wand, 'd:/Box_opens_closes.gif') MagickSetLastIterator(wand) length = MagickGetIteratorIndex(wand) MagickSetFirstIterator(wand) for i in range(0, length+1): MagickSetIteratorIndex(wand,i) MagickScaleImage(wand, 87, 58) MagickWriteImages(wand, 'path', 1) The GIF where I get a bad frame is this: test gif. In GIF editor software, all frames are OK. Where is the problem? Thanks

    Read the article
