Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Get type of the parameter from list of objects, templates, C++

    - by CrocodileDundee
    This question follows my previous question, Get type of the parameter, templates, C++. There is the following data structure:

    Object1.h:

        template <class T>
        class Object1 {
        private:
            T a1;
            T a2;
        public:
            T getA1() { return a1; }
            typedef T type;
        };

    Object2.h:

        template <class T>
        class Object2 : public Object1<T> {
        private:
            T b1;
            T b2;
        public:
            T getB1() { return b1; }
        };

    List.h:

        template <typename Item>
        struct TList {
            typedef std::vector<Item> Type;
        };

        template <typename Item>
        class List {
        private:
            typename TList<Item>::Type items;
        };

    Is there any way to get the type T of an object from the list of objects (i.e. Object is not a direct parameter of the function but a template parameter)?

        template <class Object>
        void process(List<Object> *objects) {
            typename Object::type a1 = objects[0].getA1();
            // g++ error: 'Object1<double>*' is not a class, struct, or union type
        }

    But this construction works (i.e. Object represents a parameter of the function):

        template <class Object>
        void process(Object *o1) {
            typename Object::type a1 = o1->getA1(); // OK
        }
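
    The g++ message suggests Object was deduced as Object1<double>*, a pointer type, which has no nested typedef. A minimal, self-contained sketch of one workaround, assuming that deduction is indeed the culprit (the element trait below is a hypothetical helper, not part of the question's code): strip the pointer before looking up ::type.

        #include <vector>

        template <class T>
        struct Object1 { typedef T type; };   // stub of the question's class

        // Hypothetical trait: peel one level of pointer so the nested
        // typedef stays reachable even when the list holds Object1<double>*.
        template <class U> struct element     { typedef U type; };
        template <class U> struct element<U*> { typedef U type; };

        template <class Object>
        void process(std::vector<Object> &objects) {
            typedef typename element<Object>::type Elem;  // e.g. Object1<double>
            typedef typename Elem::type T;                // e.g. double
            T a1 = T();
            (void)objects; (void)a1;
        }

        int main() {
            std::vector<Object1<double>*> byPointer;
            std::vector<Object1<double> > byValue;
            process(byPointer); // compiles: the trait strips the pointer
            process(byValue);   // compiles: the trait is a no-op
        }

    With C++11 and later, std::remove_pointer from <type_traits> does the same job as the hand-rolled trait.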

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-important query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of:

    1) Pricing
    2) Latency resulting from:
       - slow CPUs, instance creation, JIT compiles, etc.
       - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
       - communication between cloud and end user
    3) Ease of deployment

    The usage scenario I am expecting:

    - A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
    - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
    - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds.
    - A background save of the response to a DB should occur (not time critical).
    - There can be up to 30,000 simultaneous users, i.e. on average 1,000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

  • Do I lose the benefits of macro recording if I develop Excel apps in Visual Studio?

    - by DanM
    I've written lots of Excel macros in the past using the following development process:

    1. Record a macro.
    2. Open the VBA editor.
    3. Edit the macro.

    I'm now experimenting with a Visual Studio 2008 "Excel 2007 Add-In" project (C#), and I'm wondering if I will have to give up this development process. Questions:

    1. I know I can still record macros using Excel, but is there any way to access the resulting code in Visual Studio? Or do I just have to copy and paste and then C#-ize it?
    2. What happens with my "Personal Macro Workbook"? Can I use the macros I have stored in there from C#? Or is there some way to convert them to C#?
    3. If there is some support for opening and editing VBA macros in Visual Studio, can you provide a very brief summary of how it works or point me to a good reference?
    4. Do you have any other tips for transitioning from writing macros in VBA using Excel's built-in editor to writing them in C# with Visual Studio?
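
    For what it's worth, the usual translation is manual: a recorded VBA macro such as Range("A1").Value = "Hello" becomes explicit interop calls in the add-in. A minimal sketch, assuming the default "Excel 2007 Add-In" project template (the Globals.ThisAddIn entry point it generates):

        using Excel = Microsoft.Office.Interop.Excel;

        public partial class ThisAddIn
        {
            private void WriteGreeting()
            {
                // ActiveSheet comes back as object; cast it, as the macro
                // recorder's implicit context would imply.
                Excel.Worksheet sheet =
                    (Excel.Worksheet)Globals.ThisAddIn.Application.ActiveSheet;
                Excel.Range cell = sheet.get_Range("A1", "A1");
                cell.Value2 = "Hello"; // VBA's .Value generally maps to .Value2
            }
        }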

  • Resources for setting up a Visual Studio/C++ development environment

    - by Tom H.
    I haven't done much "front-end" development in about 15 years since moving to database development. I'm planning to start work on a personal project using C++, and since I already have MSDN I'll probably end up doing it in Visual Studio 2010. I'm thinking about using Subversion as a version control system eventually.

    Of course, I'd like to get up and running as quickly as I can, but I'd also like to avoid any pitfalls from a poorly organized project environment. So, my question is: are there any good resources with common best practices for setting up a development environment? I'm thinking along the lines of when to break a solution down into multiple projects, how to set up a unit testing process, organizing resources, directories, etc. Are there any great add-ons that I should make sure I have set up from the start?

    Most tutorials just have one simple project: type in your code and click Build to see that your new application says "Hello World!". This will be a Windows application with several DLLs (no web development), so there doesn't need to be a deploy-to-a-web-server kind of process. Mostly I just want to make sure that I don't miss anything big and then have to extensively refactor because of it. Thanks!

  • Any techniques to interrupt, kill, or otherwise unwind (releasing synchronization locks) a single deadlocked thread?

    - by gojomo
    I have a long-running process where, due to a bug, a trivial/expendable thread is deadlocked with a thread which I would like to continue, so that it can perform some final reporting that would be hard to reproduce in another way. Of course, fixing the bug for future runs is the proper ultimate resolution. Of course, any such forced interrupt/kill/stop of any thread is inherently unsafe and likely to cause other unpredictable inconsistencies. (I'm familiar with all the standard warnings and the reasons for them.) But still, since the only alternative is to kill the JVM process and go through a more lengthy procedure which would result in a less complete final report, messy/deprecated/dangerous/risky/one-time techniques are exactly what I'd like to try.

    The JVM is Sun's 1.6.0_16 64-bit on Ubuntu, and the expendable thread is waiting-to-lock an object monitor. Some ideas I've had:

    - Can an OS signal directed at the exact thread create an InterruptedException in the expendable thread?
    - Could attaching with gdb and directly tampering with JVM data, or calling JVM procedures, allow a forced release of the object monitor held by the expendable thread?
    - Would a Thread.interrupt() from another thread generate an InterruptedException from the waiting-to-lock frame? (With some effort, I can inject an arbitrary beanshell script into the running system.)
    - Can the deprecated Thread.stop() be sent via JMX or any other remote-injection method?

    Any ideas appreciated; the more 'dangerous', the better! And if your suggestion has worked in personal experience in a similar situation, the best!
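
    On the Thread.interrupt() idea, a quick standalone demo (not from the question) of why it is unlikely to help: a thread blocked entering a synchronized block is not at an interruptible point, so interrupt() merely sets the flag and no InterruptedException is ever thrown.

        public class MonitorInterruptDemo {
            static final Object LOCK = new Object();

            public static void main(String[] args) throws Exception {
                synchronized (LOCK) {                     // main holds the monitor
                    Thread waiter = new Thread(new Runnable() {
                        public void run() {
                            synchronized (LOCK) { }       // blocks here
                        }
                    });
                    waiter.start();
                    Thread.sleep(200);                    // let it reach BLOCKED
                    waiter.interrupt();
                    Thread.sleep(200);
                    System.out.println(waiter.getState());      // BLOCKED
                    System.out.println(waiter.isInterrupted()); // true, still stuck
                }                                         // lock released; demo exits
            }
        }

    Lock-based waits (java.util.concurrent.locks.Lock.lockInterruptibly) would respond to the interrupt, but a plain monitor wait-to-lock will not.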

  • Android: How to periodically check current location without draining the battery

    - by uyahalom
    I have a background service which works periodically via timer.scheduleAtFixedRate. It wakes up every so often (let's say 60 seconds for example) and checks the location. The location is requested by

        locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 5, listener);

    and the actual location is collected in the listener's onLocationChanged. Now, when the phone is outside and GPS reception is good, this works fine. But if the phone is inside, the GPS is almost always active, looking for a signal, and the battery drains rapidly.

    I created another thread using a Handler and a Runnable in order to control the GPS active time accurately: I used

        locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, listener);

    and

        locManager.removeUpdates(listener);

    so I can start and stop the GPS as I want. In this case, I can keep the GPS on for an exact amount of time, but found out that it doesn't get a lock in an area with good reception even after 10 seconds. So here I'm draining the battery again... I'm using API level 7, hence I cannot use locationManager.requestSingleUpdate.

    I have two questions:

    1. Is there any way to optimize this process?
    2. Will upgrading to API level 9 (and using locationManager.requestSingleUpdate) improve the process significantly? I mean, is it worth upgrading?
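
    A minimal sketch of the timed-window variant described above, assuming the code lives in a Service that already has the locManager field (imports omitted; the 30-second window is an arbitrary example). The point is that both a fix and the timeout cancel the updates, so the radio can never stay on past the window:

        private static final long GPS_WINDOW_MS = 30 * 1000;
        private final Handler handler = new Handler();

        private void sampleLocationOnce() {
            locManager.requestLocationUpdates(
                    LocationManager.GPS_PROVIDER, 0, 0, listener);
            handler.postDelayed(stopper, GPS_WINDOW_MS);
        }

        private final Runnable stopper = new Runnable() {
            public void run() {
                // Give up whether or not a fix arrived.
                locManager.removeUpdates(listener);
            }
        };

        private final LocationListener listener = new LocationListener() {
            public void onLocationChanged(Location loc) {
                locManager.removeUpdates(listener);  // got a fix: stop at once
                handler.removeCallbacks(stopper);
                // ... record loc ...
            }
            public void onProviderDisabled(String provider) {}
            public void onProviderEnabled(String provider) {}
            public void onStatusChanged(String provider, int status, Bundle extras) {}
        };

    On the second question: requestSingleUpdate is essentially this pattern packaged by the framework, so the battery behaviour should be similar; the win is mostly less code.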

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers.

    We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match).

    I've tried copying directly from the BD-ROM (mapping the guest's optical drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady.

    The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours.

    I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
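
    One way to keep the site responsive is a marker-column batch loop run from cron every few minutes. A rough sketch, with the table and column names purely illustrative (records, processed_at) and a random factor standing in for the real formulas:

        <?php
        // One batched pass; run this from cron every few minutes.
        $pdo = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
        $batchSize = 1000;

        do {
            $rows = $pdo->query(
                "SELECT id, value, user_id FROM records
                 WHERE processed_at IS NULL
                    OR processed_at < NOW() - INTERVAL 3 HOUR
                 ORDER BY user_id      -- keeps one user's records adjacent
                 LIMIT $batchSize"
            )->fetchAll(PDO::FETCH_ASSOC);

            $update = $pdo->prepare(
                "UPDATE records SET value = ?, processed_at = NOW() WHERE id = ?");

            foreach ($rows as $row) {
                // stand-in for the real formulas and random factors
                $result = $row['value'] * (mt_rand() / mt_getrandmax());
                $update->execute(array($result, $row['id']));
            }

            usleep(250000);  // yield between batches so the site stays responsive
        } while (count($rows) === $batchSize);

    Because the marker column records what has been processed, the batch size and cron frequency can be tuned independently until the full set comfortably fits inside each three-hour window.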

  • Need a Java-based interruptible timer thread

    - by LambeauLeap
    I have a main program which is running a script on the target device (smart phone) and sitting in a while loop waiting for stdout messages. However, in this particular case, some of the heartbeat messages on stdout could be spaced almost 45 seconds to a minute apart. Something like:

        stream = device.runProgram(RESTORE_LOGS, new String[] {});
        stream.flush();
        String line = stream.readLine();
        while (line.compareTo("") != 0) {
            reporter.commentOnJob(jobId, line);
            line = stream.readLine();
        }

    So, I want to be able to start a new interruptible thread after reading a line from stdout, with a required sleep window. Upon reading a new line, I want to be able to interrupt/stop it (I'm having trouble killing the process), handle the new line of stdout text, and restart the timer. And in the event that I am not able to read a line within the timer window (say 45 secs), I want a way to get out of my while loop too. I already tried the thread.run, thread.interrupt approach, but I'm having trouble killing and starting a new thread. Is this the best way, or am I missing something obvious?
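
    Rather than restarting a watchdog thread per line, one reader thread can keep reading while the main loop polls a queue with a timeout. A sketch using the question's own names (stream, reporter, jobId); the BufferedReader type is an assumption made just so the sketch compiles:

        import java.io.BufferedReader;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        private void pumpWithTimeout(final BufferedReader stream) throws Exception {
            final BlockingQueue<String> lines = new LinkedBlockingQueue<String>();
            ExecutorService reader = Executors.newSingleThreadExecutor();
            reader.submit(new Runnable() {
                public void run() {
                    try {
                        String line;
                        while ((line = stream.readLine()) != null
                                && line.compareTo("") != 0) {
                            lines.put(line);
                        }
                    } catch (Exception e) {
                        // stream died; the poll below will time out
                    }
                }
            });

            String line;
            while ((line = lines.poll(45, TimeUnit.SECONDS)) != null) {
                reporter.commentOnJob(jobId, line);  // null poll = 45s of silence
            }
            reader.shutdownNow();  // interrupts the blocked reader thread
        }

    No thread is ever killed or restarted: readLine() blocks only in the reader thread, and the main loop simply stops polling when the 45-second window expires.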

  • Problem calling a plugin inside an ajax callback function

    - by zurna
    I pull the categories of one section from an XML file. The problem is that the pulled categories need to recognize the tab plugin, so I tried to apply the tab plugin to the CategoryName variable. But it is not working; I get the following error:

        Error: CategoryName.find is not a function
        Source File: http://www.refinethetaste.com/FLPM/
        Line: 23

    Test website: http://www.refinethetaste.com/FLPM/

        $.ajax({
            dataType: "xml",
            url: "/FLPM/content/home/index.cs.asp?Process=ViewVCategories",
            success: function(xml) {
                $(xml).find('row').each(function(){
                    var id = $(this).attr('id');
                    var CategoryName = $(this).find('CategoryName').text();
                    $("<div class='tab fleft'><a href='http://www.refinethetaste.com/FLPM/content/home/index.cs.asp?Process=ViewVideos&CATEGORYID="+ id +"'>"+ CategoryName + "</a></div>").appendTo("#VCategories");
                    CategoryName.find("div.row-title .red").tabs("div.panes > div");
                });
            }
        });

    The pulled categories are displayed here:

        <div class="row-title clear" id="VCategories"></div>

    The categories XML:

        <rows>
            <row id="1"><CategoryName>Nation</CategoryName></row>
            <row id="2"><CategoryName>Politics</CategoryName></row>
            <row id="3"><CategoryName>Health</CategoryName></row>
            <row id="4"><CategoryName>Business</CategoryName></row>
            <row id="5"><CategoryName>Culture</CategoryName></row>
        </rows>
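
    The error itself is straightforward: CategoryName holds a plain string, and strings have no .find(). A possible rearrangement (a sketch, with the selectors kept from the question): initialize the plugin once on the container after the loop, when the new elements actually exist in the DOM.

        $.ajax({
            dataType: "xml",
            url: "/FLPM/content/home/index.cs.asp?Process=ViewVCategories",
            success: function (xml) {
                $(xml).find("row").each(function () {
                    var id = $(this).attr("id");
                    var name = $(this).find("CategoryName").text();
                    $("<div class='tab fleft'><a href='/FLPM/content/home/index.cs.asp?Process=ViewVideos&CATEGORYID=" + id + "'>" + name + "</a></div>")
                        .appendTo("#VCategories");
                });
                // run the plugin once, on a jQuery object rather than a string
                $("div.row-title .red").tabs("div.panes > div");
            }
        });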

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms, and the monitoring tool reports all servers as healthy. However, after a while all the servers are marked as "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column).

    Viewing the event log on the web farm controller and on the farm servers doesn't reveal anything interesting; there are no warnings or errors in the period where the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms will die during the night when the load is low. Am I missing something?

    EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going?

    EDIT 2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.
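
    If the idle shutdowns turn out to matter, the standard mitigation (not necessarily the fix for the unavailable state) is to zero the application pool's idle timeout on the member servers, for example with appcmd; "MyFarmPool" below is a placeholder for the actual pool name:

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyFarmPool" /processModel.idleTimeout:"00:00:00"

    A timeout of 00:00:00 means the worker process is never recycled for inactivity, which also rules that event out while debugging the monitoring state.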

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and deals with reading a large number of disk files. The total load can be more than 3 GB. There is a custom memory manager that uses memory-mapped files to achieve reading of such a huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory is well under control. But what is observed is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory. This leads to the slowing down of the entire system.

    My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with this, the read operations take a considerable amount of time and slow down the application performance. How do I achieve scalability without sacrificing much on performance? What are the common techniques used in such cases?

    I don't have a good understanding of the WinXP OS caching behavior. Any good links explaining the same would also be helpful.
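
    A middle ground sometimes suggested, sketched below with plain Win32 calls: FILE_FLAG_RANDOM_ACCESS is only a hint, so reads stay buffered (unlike FILE_FLAG_NO_BUFFERING) but the cache manager curbs read-ahead, and mapping small aligned views rather than whole files limits what becomes resident at once. Whether this tames the cache enough on XP is something to measure, not a guarantee:

        #include <windows.h>

        /* Map just the requested slice of a file, aligned down to the
           allocation granularity as MapViewOfFile requires. */
        void *map_slice(const wchar_t *path, DWORD64 offset, SIZE_T length,
                        HANDLE *file, HANDLE *mapping) {
            *file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, NULL);
            if (*file == INVALID_HANDLE_VALUE) return NULL;

            *mapping = CreateFileMappingW(*file, NULL, PAGE_READONLY, 0, 0, NULL);
            if (!*mapping) { CloseHandle(*file); return NULL; }

            SYSTEM_INFO si;
            GetSystemInfo(&si);
            DWORD64 base  = offset - (offset % si.dwAllocationGranularity);
            SIZE_T  extra = (SIZE_T)(offset - base);

            /* Caller reads at ((char*)view) + extra, then unmaps promptly. */
            return MapViewOfFile(*mapping, FILE_MAP_READ,
                                 (DWORD)(base >> 32), (DWORD)base,
                                 length + extra);
        }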

  • How to ignore the validation of unknown tags?

    - by infant programmer
    One more challenge to XSD's capability: the XML files my clients send me can contain 0 or more undefined or (call them) unexpected tags, which may appear anywhere in the hierarchy. They are redundant tags for me, so I have to ignore their presence, but alongside them there is a set of tags which do need to be validated. This is a sample XML:

        <root>
            <undefined_1>one</undefined_1>
            <undefined_2>two</undefined_2>
            <node>to_be_validated</node>
            <undefined_3>two</undefined_3>
            <undefined_4>two</undefined_4>
        </root>

    And the XSD I tried:

        <xs:element name="root" type="root"></xs:element>
        <xs:complexType name="root">
            <xs:sequence>
                <xs:any maxOccurs="2" minOccurs="0"/>
                <xs:element name="node" type="xs:string"/>
                <xs:any maxOccurs="2" minOccurs="0"/>
            </xs:sequence>
        </xs:complexType>

    XSD doesn't allow this, for certain reasons. The example above is just a sample; the real XML comes with a complex hierarchy of tags. Kindly let me know if you know a hack for it. By the way, the alternative solution is to insert an XSL transformation before the validation process. I am avoiding it because I would need to change the .NET code which triggers the validation process, which is supported least by my company.
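
    For reference, the variant usually proposed (a sketch, not a guaranteed fix): processContents="skip" tells the validator not to look inside whatever the wildcard matches. Note that this sidesteps XSD 1.0's Unique Particle Attribution clash only when the unknown tags live in a different namespace (namespace="##other"); same-namespace extras still push you toward the XSLT pre-filter.

        <xs:complexType name="root">
            <xs:sequence>
                <xs:any namespace="##other" processContents="skip"
                        minOccurs="0" maxOccurs="unbounded"/>
                <xs:element name="node" type="xs:string"/>
                <xs:any namespace="##other" processContents="skip"
                        minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
        </xs:complexType>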

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under the Solaris and Linux platforms to measure the performance of applying this code to my application. The program works as follows: first, using the sbrk(0) system call, I take the base address of the heap region. Then I allocate 1.5 GB of memory using malloc and use memcpy to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I use the sbrk(0) system call again to view the heap size.

    This is where I am a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5 GB), the heap size of the process stays huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the heap size before the 1.5 GB allocation.

    I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after the free() call. Also, please explain why the same problem does not come up under Linux? Can anyone help me out with this? Thanks, Santhosh.
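
    The Linux behaviour is most likely glibc serving blocks above its mmap threshold straight from mmap(), so free() munmaps them immediately, whereas the Solaris libc keeps freed space inside the heap for reuse. A small sketch of making that explicit on glibc (M_MMAP_THRESHOLD is a glibc-specific mallopt knob; on Solaris the equivalent trick is to mmap()/munmap() the large buffer yourself or link an alternative allocator):

        #include <malloc.h>   /* mallopt, glibc-specific */
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            size_t size = (size_t)1536 * 1024 * 1024;   /* 1.5 GB */

            /* Force blocks of 1 MB and above to be served by mmap(). */
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

            char *p = malloc(size);
            if (p != NULL) {
                memset(p, 0xAB, size);
                free(p);   /* munmap'ed here, so the heap shrinks at once */
            }
            return 0;
        }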

  • Windows 7: Dual Monitor Issue

    - by sdoca
    I have a Dell laptop with a docking station and two external monitors hooked up. I'm running Windows 7 64-bit. Originally, the two monitors were the same, Dell 1908FPs. I have replaced one of them with an HP LA2405. I have set the HP 24" (1920 x 1200, connected via the DVI port) as monitor 1 and extended onto the Dell 19" (1280 x 1024, connected via the VGA port) as monitor 2.

    I have set my power plan so that when the laptop is plugged in, it turns the monitors off after 10 minutes but does not put the laptop to sleep. However, now that I have the HP monitor, when I unlock my laptop and the monitors come back on, all my windows are resized and shifted to the top left corner of the HP monitor, as if they had been resized for display on the smaller Dell monitor.

    A co-worker has the exact same laptop and monitor configuration but doesn't experience this issue, so I figure there's some configuration that's different, but we can't find it. I haven't been able to find any mention of a similar problem in an internet search, but I'm not really sure what terms to use. Anybody have any suggestions as to what may be causing the issue? OS setting? Monitor setting?

    EDIT: My laptop is using the Intel Graphics and Media Control Panel/drivers.

  • servlet connection to DB

    - by underW
    Initially, after reading books on the subject, I firmly believed that the algorithm for working with a database from a servlet was as follows: create a connection, connect to the database, form a query, send the query to the database, get the query results, process them, close the connection. Done.

    Now, with a better understanding of the practical side, I realize that nobody does it that way; everything happens through a connection pool, according to the following algorithm: initialize the servlet, create a connection pool, a request comes from a user, take a free connection from the pool, form a query, send the query to the database, get the query results, process them, return the connection to the pool. Done.

    Now I have this problem: we have 100 users, divided into 10 groups, and each group has its own username and password for connecting to the database. Moreover, each group may have different rights to the database. How am I supposed to use a connection pool in this situation? If I understand correctly, a pool is nothing more than a group of similar connections with a single login and password, and here I have 10 username/password pairs. It looks like I cannot use a single pool in this situation. What should I do?
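
    One common answer is simply one pool per credential set, kept in a map keyed by group. A sketch, assuming Apache Commons DBCP for the pool class and a MySQL URL purely as an example:

        import java.sql.Connection;
        import java.util.HashMap;
        import java.util.Map;
        import javax.sql.DataSource;
        import org.apache.commons.dbcp.BasicDataSource;

        public class GroupPools {
            private final Map<String, DataSource> pools =
                    new HashMap<String, DataSource>();

            /** credentials: group name -> { dbUser, dbPassword } */
            public GroupPools(Map<String, String[]> credentials) {
                for (Map.Entry<String, String[]> e : credentials.entrySet()) {
                    BasicDataSource ds = new BasicDataSource();
                    ds.setUrl("jdbc:mysql://localhost/app");
                    ds.setUsername(e.getValue()[0]);
                    ds.setPassword(e.getValue()[1]);
                    ds.setMaxActive(10);  // 10 groups x 10 = 100 connections max
                    pools.put(e.getKey(), ds);
                }
            }

            public Connection connectionFor(String group) throws Exception {
                return pools.get(group).getConnection();  // caller must close()
            }
        }

    Since the rights live in the database user, each group's queries are automatically limited to that group's privileges; the servlet only has to pick the right pool per request.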

  • Clean installation of RHEL 5.5 claims package "desktops" is missing

    - by TKguru42
    Hi all, I'm a student worker in the CS department of my university, so please forgive me for any unprofessional descriptions. Simplified explanations are appreciated.

    I recently replaced some bad graphics cards in a few public workstations. The machines are all the same model. Before putting them back on the network I did fresh installs of RHEL. First I tried 5.4, but yum update ran into all sorts of ugly dependency errors, and if I tried to remove any of the problematic packages, the whole operating system FUBAR'd. Using RHEL 5.5 gave me the same errors during install, saying that "java.1.5.1-sun*" and "desktops" were missing, but yum update didn't have any dependency problems.

    Now that I've tried logging in through the GUI, there is no GUI past the standard RHEL login page. The desktop is a uniform light teal and there's no system tray. An xclock window and an xterm window are open, and Firefox opens automatically, but that's it. Nothing else. What's REALLY confusing is that the computer claims that GNOME is already installed, except it clearly isn't working.

    Any help or advice is greatly appreciated. If it helps, our department uses kickstart to run our standard Linux installs. I can try to get the script if that would be of use. Thank you!

  • How to animate button text? (loading type of animation) - jQuery

    - by AL
    I've got something that I want to do, and I want to see what you guys think and how it can be implemented. I have a form in a page, and the form will submit to itself to perform some process (run functions) in a class. Upon clicking the submit button, I want to animate the button text from "SUBMIT" to "SUBMIT .", "SUBMIT ..", "SUBMIT ...", "SUBMIT ....", and then back again, sort of "animating" the button text. Once the whole process is done, the button text goes back to plain "SUBMIT".

    Please note that I am not looking for an image-button solution, meaning that I do not want to implement an animated GIF button image. Has anyone done this before, or do you know how it can be done? I have googled, but nothing of this kind seems to be found. Thanks! AL
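
    A minimal jQuery sketch of one way to do it (the IDs are illustrative): setInterval cycles the dots, and the timer is cleared when the work finishes. Note that with a plain self-posting form the page reloads anyway, so this is mostly useful with an AJAX submit:

        var dotsTimer = null;

        function startButtonAnimation($btn) {
            var dots = 0;
            dotsTimer = setInterval(function () {
                dots = (dots + 1) % 5;  // 0..4 dots, then wrap around
                $btn.val("SUBMIT " + new Array(dots + 1).join("."));
            }, 400);
        }

        function stopButtonAnimation($btn) {
            clearInterval(dotsTimer);
            $btn.val("SUBMIT");
        }

        $("#myForm").submit(function () {
            startButtonAnimation($("#submitBtn"));
            // for an AJAX submit, call stopButtonAnimation($("#submitBtn"))
            // in the complete/success handler when the server is done
        });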

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?

    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files.
    2. Two RAID 1+0 arrays, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.

    The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). In every configuration that I've done or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me, and I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx

    In the section titled "3. RAID10 Configuration", describing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level."

    But every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs, and everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

  • How to get IE to open JavaScript as text

    - by Pete
    I am running IE 9. Up until sometime last week, if I put the URL of a JavaScript file in the address bar, it would show the JavaScript as text in the browser window. Now when I do that, it wants to download the JavaScript file. How can I revert to the previous handling? This is annoying since I'm developing a web application, and if I can get IE to display the .js files as text in the browser, I can refresh to force the cache to update.

    Update: I've tested on several co-workers' machines. For some, browsing to .js files renders them in the browser (IE 9 in all cases). For others, it asks for a download. File associations don't seem to have any effect. With one co-worker we tested both IE and Chrome: IE wanted to download the file, but Chrome rendered it as text. This makes me think it's an IE issue and not an OS issue.

  • How to change the encoding?

    - by simply denis
    I need to convert to, or set, the encoding windows-1251:

        Process p = new Process();
        StreamWriter sw;
        StreamReader sr;
        StreamReader err;
        ProcessStartInfo psI = new ProcessStartInfo("cmd");
        psI.UseShellExecute = false;
        psI.RedirectStandardInput = true;
        psI.RedirectStandardOutput = true;
        psI.RedirectStandardError = true;
        psI.CreateNoWindow = true;
        p.StartInfo = psI;
        p.Start();
        sw = p.StandardInput;
        sr = p.StandardOutput;
        err = p.StandardError;
        sw.AutoFlush = true;
        if (tbComm.Text != "")
            sw.WriteLine(tbComm.Text);
        else
            // execute default command
            sw.WriteLine("dir \\");
        sw.Close();
        textBox1.Text = sr.ReadToEnd(); // this does not handle Russian words;
                                        // I need to convert or set windows-1251
        textBox1.Text += err.ReadToEnd();
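
    One likely wrinkle, sketched below: on a Russian system, cmd.exe writes its output in the OEM console code page (866), not windows-1251, so wrapping the raw stream in a reader with that encoding usually fixes the garbled text (on newer frameworks, ProcessStartInfo.StandardOutputEncoding achieves the same):

        using System.Diagnostics;
        using System.IO;
        using System.Text;

        class OemOutputDemo
        {
            static void Main()
            {
                ProcessStartInfo psi = new ProcessStartInfo("cmd", "/c dir \\");
                psi.UseShellExecute = false;
                psi.RedirectStandardOutput = true;
                psi.CreateNoWindow = true;

                Process p = Process.Start(psi);
                // Decode the console (OEM) code page explicitly:
                StreamReader sr = new StreamReader(p.StandardOutput.BaseStream,
                                                   Encoding.GetEncoding(866));
                string output = sr.ReadToEnd();
                p.WaitForExit();
                System.Console.WriteLine(output);
            }
        }

    If the text must end up as windows-1251 bytes, Encoding.Convert(Encoding.GetEncoding(866), Encoding.GetEncoding(1251), bytes) performs the conversion.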

  • Should I *always* import my file references into the database in Drupal?

    - by sprugman
    I have a CCK type with an image field and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons:

    1) Theme-layer approach

    Pros:
    - makes the import process much less complex
    - don't have to worry about syncing the db with the file system as things change
    - more flexible: I can move my images around more easily if I want

    Cons:
    - maybe not as much The Drupal Way™
    - not as pure: I'll wind up with more logic on the theme side

    2) Import approach

    Pros:
    - the import method is required if we ever wanted to make the files private (we won't)
    - safer? Maybe I'll know if there's a problem with the image at import time, rather than at view time. Since I'll be bulk importing, that might make a difference.
    - if I delete a node through the admin interface, Drupal might be able to delete the file for me as well

    Cons:
    - more complex import and maintenance

    All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open-ended question, I guess I'll make it a community wiki, whatever that means.)

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game which consists of:

    1) The list itself:

        List<Event> calendar;

    2) The event class:

        public class Event
        {
            /// <summary>
            /// When to process the event
            /// </summary>
            public Int64 when;

            /// <summary>
            /// Which object should process the event
            /// </summary>
            public GameObject who;

            /// <summary>
            /// Type of event
            /// </summary>
            public EventType what;

            public int posX;
            public int posY;
            public int EventID;
        }

    3) Adding events:

        calendar.Add(new Event(...));

    The problem with this code is that even though the number of messages per second is not excessive, it keeps allocating new memory which the GC will eventually need to collect. The garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code.

    My considerations:

    1. Change the Event class into a structure, but the structure is not entirely small, and it takes some time to copy it wherever I need it.
    2. Reuse Event objects somehow (add a queue of used events, and when a new event is needed, take one from this queue).

    Does anybody have another idea how to solve the problem? Thanks for suggestions!
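
    A minimal sketch of consideration 2, an object pool built on the question's own Event/GameObject/EventType types: recycled instances mean steady-state play allocates nothing new for the GC to collect.

        using System.Collections.Generic;

        public class EventPool
        {
            private readonly Queue<Event> free = new Queue<Event>();

            public Event Acquire(long when, GameObject who, EventType what)
            {
                Event e = free.Count > 0 ? free.Dequeue() : new Event();
                e.when = when;
                e.who = who;
                e.what = what;
                return e;
            }

            public void Release(Event e)
            {
                e.who = null;      // drop the reference so the GameObject can die
                free.Enqueue(e);   // ready for the next Acquire
            }
        }

    The calendar then calls pool.Release(ev) after processing instead of letting the event become garbage; the pool grows to the peak in-flight event count and then stops allocating.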

  • Need a design approach or suggestion for a simple structure using servlets

    - by akshay
    Hi, I have to design the following: whenever a user submits a query, I process it using a servlet and then call the JS page to draw the chart.

    1. The user writes a query on a page.
    2. The page calls the servlet class:

        public class MyServlet extends HttpServlet implements DataSourceServlet { ... }

    and the user sees a beautiful string like this:

        google.visualization.Query.setResponse(.........{v:'Tiger'},{v:80.0},{v:false}]}]}});

    3. When the user hits a different HTML page, myhtml.js draws the chart.

    I want the MyServlet class itself to call the myhtml.js page and draw the chart directly, and to eliminate the beautiful string from appearing in the user's browser. What should I do? I tried using functions to call another page, like RequestDispatcher and redirect(), calling the myhtml.js page directly after MyServlet processes the query results. But I get the setResponse string with the entire myhtml.js code below it in the browser, and that without the chart being drawn.

    Is there any way to keep the string from reaching the client's browser and only show them the drawn chart? :)

    This is the small tutorial I am following: http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html
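
    As the linked tutorial sets it up, the setResponse string is not meant to be browsed to directly; the chart page queries the servlet itself, and the Visualization API consumes the payload over XHR. A sketch of the client side under that assumption (the servlet path, element id, and userQuery variable are illustrative; userQuery stands for the text the user typed):

        google.load('visualization', '1', { packages: ['table'] });
        google.setOnLoadCallback(function () {
            // point the query at the DataSourceServlet, passing the user's text
            var query = new google.visualization.Query(
                '/myapp/myservlet?q=' + encodeURIComponent(userQuery));
            query.send(function (response) {
                if (response.isError()) {
                    return;  // surface response.getMessage() somewhere useful
                }
                var table = new google.visualization.Table(
                    document.getElementById('chart_div'));
                table.draw(response.getDataTable(), {});
            });
        });

    With this wiring, the user only ever loads the HTML page; the raw google.visualization.Query.setResponse(...) text travels behind the scenes and is never rendered.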

  • Salesforce/PHP - outbound messages (SOAP) - memory limit issue? DOMDocument::loadXML() issue?

    - by Phill Pafford
    I'm using Salesforce to send outbound messages (via SOAP) to another server. The server can process about 8 messages at a time, but will not send back the ACK file if the SOAP request contains more than 8 messages. Salesforce can send up to 100 outbound messages in one SOAP request, and I think this is causing a memory issue with PHP. If I process the outbound messages one by one, they all go through fine; I can even do 8 at a time with no issues. But larger sets are not working.

    The error in Salesforce:

        org.xml.sax.SAXParseException: Premature end of file

    Looking in the HTTP error logs, I see that the incoming SOAP message appears to be getting cut off, which throws a PHP warning stating:

        DOMDocument::loadXML() ... Premature end of data in tag ...
        PHP Fatal error: Call to a member function getAttribute() on a non-object

    This leads me to believe that PHP is having a memory issue and cannot parse the incoming message due to its size. I was thinking I could just set:

        ini_set('memory_limit', '64M'); // This has done nothing to fix the problem

    But would this be the correct approach? Is there a way I could set this to increase with the incoming SOAP request dynamically?

    UPDATE: Adding some code:

        $data = fopen('php://input', 'rb');
        $headers = getallheaders();
        $content_length = $headers['Content-Length'];
        $buffer_length = 1000;
        $fread_length = $content_length + $buffer_length;
        $content = fread($data, $fread_length);

        /**
         * Parse values from soap string into DOM XML
         */
        $dom = new DOMDocument();
        $dom->loadXML($content);
        ....
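
    One thing worth checking before the memory theory (a sketch, not a confirmed diagnosis): a single fread() is allowed to return before the whole POST body has arrived, which would truncate exactly the larger payloads. Reading php://input in a loop, or via file_get_contents, guarantees the full body is consumed:

        <?php
        // simplest form:
        $content = file_get_contents('php://input');

        // or the explicit loop, if the stream must be read incrementally:
        $data = fopen('php://input', 'rb');
        $content = '';
        while (!feof($data)) {
            $content .= fread($data, 8192);
        }
        fclose($data);

        $dom = new DOMDocument();
        if (!$dom->loadXML($content)) {
            error_log('loadXML failed; got ' . strlen($content) . ' bytes');
        }

    If the payload really does exceed the limit, memory_limit can also be raised per request at the top of the endpoint script, but 100 messages of modest size should normally fit well within 64M.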
