Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 731/880 | < Previous Page | 727 728 729 730 731 732 733 734 735 736 737 738  | Next Page >

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project that uses NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that an entity object doesn't get updated after insertions when I call the getter for the collection mapped on the foreign-key property. Example: I have two tables, Table1 and Table2. The sno/id column is the primary key in Table1 and a foreign key in Table2. Through the find method I fetch a particular sno object (which already exists in Table1), set some values, persist them to Table2 and commit the transaction. When I then select the same sno object again via find and read its Table2 collection through getTable2Collection() (generated on the bean by TopLink Essentials), the newly added record is missing, although all the other records are displayed. Only after I close and reopen the application does the new record show up when I go through the same process for that sno. I understand this is a kind of caching/lazy-fetching behaviour and that some fetch or refresh policy needs to change so the entity object picks up the changes. Please help me in this regard. Regards, Chandu

    Read the article

  • OpenCV application crashes at runtime with error code 0xc0000142

    - by Tuan Anh
    I have OpenCV and MinGW installed with the Code::Blocks IDE, following the instructions found here: http://kevinhughes.ca/tutorials/opencv-install-on-windows-with-codeblocks-and-mingw/ I tried the simple image-loading program from the article and the build went fine, but when I run the output program it crashes with the error message "The application was unable to start correctly (0xc0000142). Click OK to close the application." I used Dependency Walker to see whether the program failed to load a DLL module; here is its output screen: https://www.dropbox.com/s/f9iaftdt8atjwpl/Screenshot%202013-11-05%2022.21.45.png I'm not used to Dependency Walker, but as far as I can see in its output, some OpenCV DLLs failed to load, and the Windows DLLs that were loaded are 64-bit instead of 32-bit (MinGW is 32-bit). I can't figure out why: I have already added the OpenCV bin directory to the Path environment variable, yet the app still cannot load the DLL modules, and I thought Windows would automatically load the proper 32-bit DLLs when a 32-bit app runs. Does anyone have ideas?

    Read the article

  • How to continuously scroll content within a DIV using jQuery?

    - by Camsoft
    Aim: to have a container DIV with a fixed height and width, and have the HTML content within that DIV scroll vertically, automatically and continuously.

    Question: Using jQuery I've written the code below, which scrolls (moves) the child DIV vertically upwards until it is outside the bounding parent box; the animation then completes, which triggers a handler that resets the position of the child DIV and starts the process again. This works fine: the content scrolls up, leaves a blank space, then starts again from the bottom and scrolls up. The requirement, however, is for the content to appear as if it repeats continuously. Is there a way to do this? (I don't want to use third-party plug-ins or libraries other than jQuery.)

    What I have so far.

    The HTML:

        <div id="scrollingContainer">
          <div class="scroller">
            <h1>This is a title</h1>
            <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse at orci mi, id gravida tellus. Integer malesuada ante sit amet enim pulvinar congue. Donec pulvinar dolor et arcu posuere feugiat id et felis.</p>
            <p>More content....</p>
          </div>
        </div>

    The CSS:

        #scrollingContainer {
          height: 300px;
          width: 300px;
          overflow: hidden;
        }
        #scrollingContainer DIV.scroller {
          position: relative;
        }

    The JavaScript:

        /**
         * Scrolls the content DIV
         */
        function scroll() {
          if ($('DIV.scroller').height() > $('#scrollingContainer').height()) {
            var t = $('DIV.scroller').position().top + $('DIV.scroller').height();
            /* Animate */
            $('DIV.scroller').animate({ top: '-=' + t + 'px' }, 4000, 'linear', animationComplete);
          }
        }

        function animationComplete() {
          $(this).css('top', $('#scrollingContainer').height());
          scroll();
        }

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000, and I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too. I've tried the "ssh -L" method; it works, but I don't think it's the right way to do this, since it leaves an extra ssh process running in the background. I tried the "netcat" method, but it exits when the connection is closed, so it isn't a valid way of "listening". I also tried several iptables rules and none of them worked; I'm not listing all the rules I've used, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work, and I have net.ipv4.ip_forward set to 1. Does anyone know how to redirect traffic from the ppp interface to lo? That is, listen on "192.168.45.1:8000 (ppp0)" as well as "127.0.0.1:8000 (lo)". There's no need to alter the port. Thanks

    Read the article

  • Entity Framework Update Error in ASP.NET Mvc with related entity

    - by Barry
    I have run into a problem which I have searched on and tried everything I can think of to solve, but to no avail. I am using the same repository and context throughout the process. I have a Booking entity and a UserExtension entity. I get my form collection back from my page and create a new booking:

        public ActionResult Create(FormCollection collection)
        {
            Booking toBooking = new Booking();

    I then do some validation and property assignment, and find an associated BidInstance:

        toBooking.BidInstance = bid;

    I have checked and the bid is not null. Finally, I get the UserExtension record for the current IPrincipal user, as below:

        UserExtension loggedInUser = m_BookingRepository.GetBookingCurrentUser(User);
        toBooking.UserExtension = loggedInUser;

    The code for GetBookingCurrentUser is:

        public UserExtension GetBookingCurrentUser(IPrincipal currentUser)
        {
            var user = (from u in Context.aspnet_Users.Include("UserExtension")
                        where u.UserName == currentUser.Identity.Name
                        select u).FirstOrDefault();
            if (user != null)
            {
                var userextension = (from u in Context.UserExtension.Include("aspnet_Users")
                                     where u.aspnet_Users.UserId == user.UserId
                                     select u).FirstOrDefault();
                return userextension;
            }
            else
            {
                return null;
            }
        }

    It returns the UserExtension fine and assigns it fine. (I originally used aspnet_Users directly, encountered this problem, and changed over to the extension entity.) As soon as I call:

        Context.AddToBooking(booking);
        Context.SaveChanges();

    I get the following exception and I'm completely baffled by how to fix it:

        Entities in 'FutureFlyersEntityModel.Booking' participate in the 'FK_Booking_UserExtension' relationship. 0 related 'UserExtension' were found. 1 'UserExtension' is expected.

    The final error that reaches the front end is:

        Metadata information for the relationship 'FutureFlyersModel.FK_Booking_BidInstance' could not be retrieved. Make sure that the EdmRelationshipAttribute for the relationship has been defined in the assembly. Parameter name: relationshipName.

    But both of the related entities are set on the Booking entity that is passed through. Please help; I'm at my wits' end with this.

    Read the article

  • PayPal sandbox account in .NET: IPN response invalid

    - by Sam
    Hi, I am integrating PayPal into my site. I use a sandbox account (one buyer account and one seller account) and downloaded the code below from the PayPal site:

        string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);

        // Set values for the request back
        req.Method = "POST";
        req.ContentType = "application/x-www-form-urlencoded";
        byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
        string strRequest = Encoding.ASCII.GetString(param);
        strRequest += "&cmd=_notify-validate";
        req.ContentLength = strRequest.Length;

        // for proxy
        // WebProxy proxy = new WebProxy(new Uri("http://url:port#"));
        // req.Proxy = proxy;

        // Send the request to PayPal and get the response
        StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
        streamOut.Write(strRequest);
        streamOut.Close();
        StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
        string strResponse = streamIn.ReadToEnd();
        streamIn.Close();

        if (strResponse == "VERIFIED")
        {
            // check the payment_status is Completed
            // check that txn_id has not been previously processed
            // check that receiver_email is your Primary PayPal email
            // check that payment_amount/payment_currency are correct
            // process payment
        }
        else if (strResponse == "INVALID")
        {
            // log for manual investigation
        }
        else
        {
            // log response/ipn data for manual investigation
        }

    When I add this snippet to the Page_Load event of the success page, I get the IPN response as INVALID, even though the amount is paid successfully. Any help? The PayPal docs are not clear. Thanks in advance.

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design decision to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to those numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could instead be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages to this solution.

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - A questionable design decision; we may run into unknown problems.
    - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
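    For illustration, here is a minimal sketch of what one such rule could look like as a SQL CLR stored procedure. The table and column names (dbo.RawLines, Id, Line, ProcessedValue) and the regex are assumptions, not taken from the question, and the assembly would still need to be cataloged in SQL Server with CREATE ASSEMBLY / CREATE PROCEDURE.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Text.RegularExpressions;
        using Microsoft.SqlServer.Server;

        public static class FileProcessing
        {
            // Minimal SQL CLR stored procedure sketch: applies a regex-based rule to
            // staged rows and writes the result back. All object names are illustrative.
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                var rows = new List<KeyValuePair<int, string>>();

                // "context connection=true" runs on the caller's connection, close to the data.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();

                    // Read the staged rows first (the context connection cannot run a
                    // second command while a reader is still open).
                    using (var read = new SqlCommand("SELECT Id, Line FROM dbo.RawLines", conn))
                    using (var reader = read.ExecuteReader())
                    {
                        while (reader.Read())
                            rows.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));
                    }

                    var extractNumber = new Regex(@"\d+");   // stand-in for the real business rule

                    using (var update = new SqlCommand(
                        "UPDATE dbo.RawLines SET ProcessedValue = @v WHERE Id = @id", conn))
                    {
                        update.Parameters.Add("@v", SqlDbType.Int);
                        update.Parameters.Add("@id", SqlDbType.Int);

                        foreach (var row in rows)
                        {
                            Match m = extractNumber.Match(row.Value);
                            update.Parameters["@v"].Value = m.Success ? (object)int.Parse(m.Value) : DBNull.Value;
                            update.Parameters["@id"].Value = row.Key;
                            update.ExecuteNonQuery();
                        }
                    }

                    // SqlContext.Pipe can at least report coarse progress/results to the caller.
                    SqlContext.Pipe.Send("Processed " + rows.Count + " rows.");
                }
            }
        }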

    Read the article

  • How can I capture Rake output when invoked from within a Ruby script?

    - by Adrian O'Connor
    I am writing a web-based dev console for Rails development. In one of my controller actions I am calling Rake, but I am unable to capture any of the output that Rake generates. For example, here is some sample code from the controller:

        require 'rake'
        require 'rake/rdoctask'
        require 'rake/testtask'
        require 'tasks/rails'
        require 'stringio'
        ...
        def show_routes
          @results = capture_stdout { Rake.tasks['routes'].invoke }
          # @results is nil -- capture_stdout doesn't capture anything that Rake generates
        end

        def capture_stdout
          s = StringIO.new
          $stdout = s
          yield
          s.string
        ensure
          $stdout = STDOUT
        end

    Does anybody know why I can't capture the Rake output? I've tried going through the Rake source, and I can't see where it fires a new process or anything, so I think I ought to be able to do this. Many thanks! Adrian

    I have since discovered the correct way to call Rake from inside Ruby, which works much better:

        Rake.application['db:migrate:redo'].reenable
        Rake.application['db:migrate:redo'].invoke

    Strangely, some Rake tasks now work perfectly (routes), some capture the output the first time they run and after that are always blank (db:migrate:redo), and some don't seem to ever capture output (test). Odd.

    Read the article

  • How to write this typical MySQL query (how to use a subquery column in the main query)

    - by I Like PHP
    I have two tables, shown below.

        table_joining
        id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
        1   j_1          t_1              u_1      2010-06-05     2010-03-05
        2   j_2          t_2              u_3      2010-05-10     2010-03-10
        3   j_3          t_3              u_6      2010-04-10     2010-01-01
        4   j_5          NULL             u_3      NULL           2010-06-05
        5   j_6          NULL             u_4      NULL           2010-05-05

        table_transfer
        id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
        1   t_1              u_3         u_1           2010-06-05
        2   t_2              u_6         u_1           2010-05-10
        3   t_3              u_5         u_3           2010-04-10

    Now I want the full employee details (using join_id) of those currently working in unit u_3. That means I want only these join_ids:

    - j_1 (has been transferred, but the effective_transfer_date is a future date, so right now it is still in u_3)
    - j_2 (transferred and right now in u_3, because the effective_transfer_date has already passed)
    - j_6 (right now in u_3 and never transferred)

    As far as I know, these are the steps I need to take care of:

    1. First check in table_joining whether transfer_id is NULL or not.
    2. If transfer_id is NULL, then check unit_id = 'u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3).
    3. If transfer_id is NOT NULL, then go to table_transfer using transfer_id (the foreign-key reference).
    4. Check whether effective_transfer_date <= CURDATE() for that transfer_id.
    5. If the transfer date has passed (the transfer has been done), return futureUnitId; otherwise return pastUnitId.

    I used two separate queries but don't know how to join them.

    For steps 1 and 2:

        SELECT unit_id
        FROM table_joining
        WHERE joining_date <= CURDATE()
          AND transfer_id IS NULL
          AND unit_id = 'u_3'

    For step 5:

        SELECT IF(effective_transfer_date <= CURDATE(), futureUnitId, pastUnitId) AS currentUnitID
        FROM table_transfer
        -- here, how do we select only those rows which have currentUnitID = 'u_3'?

    Please guide me through the process. I'm just confused by JOINs. I think using a LEFT JOIN can return the data I need, but I'm not getting how to implement it. Please help me. Thanks for always helping me.

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe: it's not safe to retrieve or modify Django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is. Basically, what I need to accomplish is to take a list of users in the db, query a third-party API, and then make updates to the user-profile rows for those users, as a daemon or background process. Doing this serially per user is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading / queue system for this, but you can't thread interactions like models.User.objects.get(id=foo). Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers

    Read the article

  • Want to learn Objective-C but syntax is very confusing

    - by Sahat
    Coming from a Java background, I am guessing this is expected. I would really love to learn Objective-C and start developing Mac apps, but the syntax is just killing me. For example:

        -(void) setNumerator: (int) n {
            numerator = n;
        }

    What is that dash for, and why is it followed by void in parentheses? I've never seen void in parentheses in C/C++, Java or C#. Why don't we have a semicolon after (int) n? But we do have one here:

        -(void) setNumerator: (int) n;

    And what's with this alloc, init, release process?

        myFraction = [Fraction alloc];
        myFraction = [myFraction init];
        [myFraction release];

    And why is it [myFraction release]; and not myFraction = [myFraction release];? And lastly, what's with the @ signs, and what is the equivalent of this @implementation in Java?

        @implementation Fraction
        @end

    I am currently reading Programming in Objective-C 2.0, and it's just so frustrating learning this new syntax for someone from a Java background.

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run a Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still fails to run.

        ...
        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple "hello world" program which I am using to check the basic functioning. My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with getting the executable to run from Python in Hadoop streaming, or with debugging this, will move me forward. Thanks, Ganesh

    Read the article

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell SVN, from the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but the execution of the svn commit command restores the files which I deleted from my working copy. The way I am doing it is:

    1. Delete some files in my working copy (the number of files in my WC is now less than the number of files in the repository).
    2. Execute the svn command using CruiseControl:

        <exec executable="svn.exe">
          <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
          <buildTimeoutSeconds>1000</buildTimeoutSeconds>
        </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes / configuration? Thank you all. Regards, Uday

    Read the article

  • Unable to download .apk via web browser from Drupal site

    - by ggrell
    I have a Drupal-based website where people can log in and see private discussion forums. This is where I want my beta testers for my Android application to download the beta .apk files. I tested this thoroughly on my Android 1.6 based myTouch 3G and was able to log in and download files attached to forum posts without problems. Now comes the interesting part: my testers on Droids and Nexus Ones (Android 2.0.1 and 2.1) were complaining that their downloads were failing. Since I don't have a 2.0 phone, I tried it out in a 2.0 emulator and, lo and behold, it didn't work. The download shows the indeterminate progress bar for a second or two, then shows "Download unsuccessful". Based on what I see in the logs, it is apparent that the server is returning a 404 for the download request from 2.0 browsers. I can download to my desktop and to the 1.6 phone with no problem. The only reason I can think of for the server to return a 404 for the request is that for some reason the credentials or cookies aren't being passed by the download process. Logcat shows:

        http error 404 for download x

    Some background: I added the MIME type to my .htaccess like this:

        AddType application/vnd.android.package-archive apk

    I checked the server logs and see the following for failed downloads:

        xx.xx.xx.224 - - [28/Jan/2010:20:39:00 -0500] "GET /system/files/grandmajong-beta090.apk HTTP/1.1" 404 - "http://trickybits.com/forums/beta-testing/grandma-jong/latest-version-090-b1" "Mozilla/5.0 (Linux; U; Android 1.6; en-us; sdk Build/Donut) AppleWebKit/528.5+ (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1"

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when this happens.

    One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency) as it will be consumed in different environments - nor do we want our websites communicating directly with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
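    A minimal sketch of the shape being described: the abstraction lives in the domain layer with no System.Web reference, and an infrastructure adapter wraps the existing membership provider behind it. The MemberAccount type, the namespace names and the member names are illustrative assumptions, not part of the original design.

        // Domain layer: no dependency on System.Web.
        namespace Domain.Model
        {
            public class MemberAccount                     // illustrative domain type
            {
                public string UserName { get; set; }
                public string Email { get; set; }
                public bool IsApproved { get; set; }
            }

            public interface IMembershipRepository
            {
                MemberAccount GetByUserName(string userName);
                bool ValidateCredentials(string userName, string password);
            }
        }

        // Infrastructure layer: the only place that knows about System.Web.Security.
        namespace Infrastructure.Data
        {
            using System.Web.Security;
            using Domain.Model;

            public class AspNetMembershipRepository : IMembershipRepository
            {
                public MemberAccount GetByUserName(string userName)
                {
                    MembershipUser user = Membership.GetUser(userName);
                    if (user == null) return null;

                    // Map the provider type onto the environment-neutral domain type.
                    return new MemberAccount
                    {
                        UserName = user.UserName,
                        Email = user.Email,
                        IsApproved = user.IsApproved
                    };
                }

                public bool ValidateCredentials(string userName, string password)
                {
                    return Membership.ValidateUser(userName, password);
                }
            }
        }

    Swapping the membership data source later would then mean writing another IMembershipRepository implementation, leaving the domain layer untouched.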

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the following, and the McCoy source is an awful lot of JavaScript. The doc says:

    "The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry."

    I don't know much about DER encoding, but it seems like it needs some parameters. So would anyone know either:

    - the full algorithm to sign the update.rdf and install.rdf using a predefined keypair, or
    - a scriptable alternative to McCoy - whether a command-line tool like asn1coding would suffice, or
    - a good/simple developer tutorial on DER encoding?
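    For what it's worth, the hash-sign-encode sequence quoted from the documentation looks roughly like the following in .NET. This is a sketch only: it assumes a modern .NET RSA API, it does not reproduce Mozilla's exact serialization of the update data or its DER wrapping, and it is therefore not a drop-in replacement for McCoy; it only illustrates the generic steps described above.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UpdateSigningSketch
        {
            // Serialize -> SHA-512 -> sign with the RSA private key -> base64.
            // The exact string Mozilla hashes and the DER structure it expects
            // are NOT reproduced here.
            public static string Sign(string serializedUpdateInfo, RSA privateKey)
            {
                byte[] data = Encoding.UTF8.GetBytes(serializedUpdateInfo);
                byte[] signature = privateKey.SignData(
                    data, HashAlgorithmName.SHA512, RSASignaturePadding.Pkcs1);
                return Convert.ToBase64String(signature);
            }
        }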

    Read the article

  • Automatically generating a Regex from a set of strings residing in a DB (C#)

    - by Muhammad Adeel Zahid
    Hello everyone. I have about 100,000 strings in a database and I want to know if there is a way to automatically generate a regex pattern from these strings. All of them are alphabetic strings using a subset of the English letters (X, W, V are not used, for example). Is there any function or library that can help me achieve this in C#? Example strings are KHTK and RAZ; given these two strings, my target is to generate a regex that allows patterns like (k, kh, kht, khtk, r, ra, raz), case-insensitive of course. I have downloaded and used some C# applications that help in generating regexes, but they are not useful in my scenario because I want a process in which I sequentially read strings from the db and add rules to the regex, so that the regex can be reused later in the application or saved to disk. I'm new to regex patterns and don't know if the thing I'm asking is even possible or not. If it is not possible, please suggest some alternate approach. Any help and suggestions are highly appreciated. Regards, Adeel Zahid
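    One way to approach the prefix matching described above is to turn each stored string into a chain of nested optional groups, OR the alternatives together, and compile the pattern once so it can be reused or written to disk. The sketch below hard-codes the two example strings for illustration; in the real application the words would be read sequentially from the database.

        using System;
        using System.Linq;
        using System.Text;
        using System.Text.RegularExpressions;

        static class PrefixRegexBuilder
        {
            // Builds a pattern that matches any non-empty prefix of any input string,
            // e.g. { "KHTK", "RAZ" } -> ^(?:K(?:H(?:T(?:K)?)?)?|R(?:A(?:Z)?)?)$
            public static Regex Build(string[] words)
            {
                var alternatives = words
                    .Where(w => !string.IsNullOrEmpty(w))
                    .Select(ToNestedOptionalGroups);

                string pattern = "^(?:" + string.Join("|", alternatives) + ")$";
                return new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled);
            }

            private static string ToNestedOptionalGroups(string word)
            {
                var sb = new StringBuilder();
                sb.Append(Regex.Escape(word[0].ToString()));
                for (int i = 1; i < word.Length; i++)
                    sb.Append("(?:").Append(Regex.Escape(word[i].ToString()));
                sb.Append(string.Concat(Enumerable.Repeat(")?", word.Length - 1)));
                return sb.ToString();
            }
        }

        // Usage sketch:
        // var rx = PrefixRegexBuilder.Build(new[] { "KHTK", "RAZ" });
        // rx.IsMatch("kht");                                   // true
        // rx.IsMatch("khz");                                   // false
        // string patternForDisk = rx.ToString();               // persist for later reuse

    With 100,000 input strings the generated pattern will be large, so persisting the pattern text (or pre-building a trie before emitting the groups) is worth considering; the structure of the builder stays the same either way.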

    Read the article

  • How do I develop browser plugins with cross-platform and cross-browser compatibility in mind?

    - by Schnapple
    My company currently has a product which relies on a custom, in-house ActiveX control. The technology it employs (TWAIN) is itself cross-platform by design, but our solution is obviously limited to Internet Explorer on Windows. Long term we would like to become cross-browser and cross-platform (i.e., support other browsers on Windows, support the Macintosh or Linux). Obviously if we wanted to support Firefox on Windows I would need to write a plugin for it. But if we wanted to support the Macintosh, how do I attack that? Is it possible to compile a version of the Firefox plugin that runs on the Mac? Would I be remiss to not also support Safari on the Mac? Are there any plugins which are cross-browser on a platform? (i.e., can any browsers run plugins for other browsers) Since TWAIN is so low-level to the operating system, I do not think Java would be a solution in any capacity, but I could be wrong. What do people generally do when they want to support multiple platforms with a process that will need to be cross-platform and cross-browser compatible?

    Read the article

  • Download, pause and resume a download using Indy components

    - by Salvador
    I'm using the TIdHTTP component to download a file from the internet. I'm wondering if it is possible to pause and resume the download using this component, or maybe another Indy component. This is my current code; it works fine for downloading a file (without resume). Now I want to pause the download, close my app, and when my app restarts, resume the download from the last position saved.

        var
          Http: TIdHTTP;
          MS: TMemoryStream;
        begin
          Result := True;
          Http := TIdHTTP.Create(nil);
          MS := TMemoryStream.Create;
          try
            try
              Http.OnWork := HttpWork; // this event gives me the actual progress of the download process
              Http.Head(Url);
              FSize := Http.Response.ContentLength;
              AddLog('Downloading File ' + GetURLFilename(Url) + ' - ' + FormatFloat('#,', FSize) + ' Bytes');
              Http.Get(Url, MS);
              MS.SaveToFile(LocalFile);
            except
              on E: Exception do
              begin
                Result := False;
                AddLog(E.Message);
              end;
            end;
          finally
            Http.Free;
            MS.Free;
          end;
        end;

    Read the article

  • Safari Extension Questions

    - by Rob Wilkerson
    I'm in the process of building my first Safari extension--a very simple one--but I've run into a couple of problems. The extension boils down to a single injected script that attempts to bypass the native feed handler and redirect to an http:// URI. My issues so far are twofold:

    1. The "whitelist" isn't working the way I'd expect. Since all feeds are shown under the "feed://" protocol, I've tried to capture that in the whitelist as "feed://*/*" (with nothing in the blacklist), but I end up in a request loop that I can't understand. If I set blacklist values of "http://*/*" and "https://*/*", everything works as expected.
    2. I can't figure out how to access my settings from my injected script. The script creates a beforeload event handler, but can't access my settings using the safari.extension.settings path indicated in the documentation. I haven't found anything in Apple's documentation to indicate that settings shouldn't be available from my script.

    Since extensions are such a new feature, even Google returns limited relevant results, and most of those are from the official documentation. What am I missing? Thanks.

    Read the article

  • Printing from web pages (reports especially) with greater precision.

    - by Kabeer
    Hello. I am re-engineering a Windows application to be ported to the web. One area that has been worrying me is printing. The application is data-intensive and complex reports need to be generated. The erstwhile Windows application takes advantage of printer APIs and extends sophisticated control to the users: it supports functions like page breaks, avoiding printing on pre-printed parts of the sheet (like a letterhead), choice of layouts and orientation, etc. Please note that these settings are not made only while printing; they are sometimes part of the report definition. From what I know, we cannot have this kind of control when printing web pages. I am in the process of identifying the options at my disposal. While I would prefer to first look into something that will help me print from raw web pages, the following are other thoughts:

    - Since reports can also be exported to .xls and .pdf versions, let the user download one and print it directly. This, however, limits my solution to the areas of the application that have an export feature.
    - Use Silverlight (4.0) for report layout definition and printing. I think Silverlight 4.0 (in beta right now) provides adequate control over the printer, but I have so far been avoiding the need for any RIA plugin.
    - Meticulously generate reports on the web with fixed dimensions. I am not sure how far this will go.

    Please share practices that can be applied easily in my scenario.

    Read the article

  • XML document being parsed as single element instead of sequence of nodes

    - by Rob Carr
    Given XML that looks like this:

        <Store>
          <foo>
            <book>
              <isbn>123456</isbn>
            </book>
            <title>XYZ</title>
            <checkout>no</checkout>
          </foo>
          <bar>
            <book>
              <isbn>7890</isbn>
            </book>
            <title>XYZ2</title>
            <checkout>yes</checkout>
          </bar>
        </Store>

    I am getting this as my parsed xmldoc:

        >>> from xml.dom import minidom
        >>> xmldoc = minidom.parse('bar.xml')
        >>> xmldoc.toxml()
        u'<?xml version="1.0" ?><Store>\n<foo>\n<book>\n<isbn>123456</isbn>\n</book>\n<title>XYZ</title>\n<checkout>no</checkout>\n</foo>\n<bar>\n<book>\n<isbn>7890</isbn>\n</book>\n<title>XYZ2</title>\n<checkout>yes</checkout>\n</bar>\n</Store>'

    Is there an easy way to pre-process this document so that when it is parsed, it isn't parsed as a single XML element?

    Read the article

  • Windows Service Conundrum

    - by Paul Johnson
    All, I have a custom object which I have written using VB.NET (.NET 2.0). The object instantiates its own Threading.Timer object and carries out a number of background processes, including periodic interrogation of an Oracle database and delivery of emails via SMTP according to data detected in the database. The following is the code implemented in the Windows service class:

        Public Class IncidentManagerService

            'Fakes
            Private _fakeRepoFactory As IRepoFactory
            Private _incidentRepo As FakeIncidentRepo
            Private _incidentDefinitionRepo As FakeIncidentDefinitionRepo
            Private _incManager As IncidentManager.Session

            'Real
            Private _started As Boolean = False
            Private _repoFactory As New NHibernateRepoFactory
            Private _psalertsEventRepo As IPsalertsEventRepo = _repoFactory.GetPsalertsEventRepo()

            Protected Overrides Sub OnStart(ByVal args() As String)
                ' Add code here to start your service. This method should set things
                ' in motion so your service can do its work.
                If Not _started Then
                    Startup()
                    _started = True
                End If
            End Sub

            Protected Overrides Sub OnStop()
                'Tear down class variables in order to ensure the service stops cleanly
                _incManager.Dispose()
                _incidentDefinitionRepo = Nothing
                _incidentRepo = Nothing
                _fakeRepoFactory = Nothing
                _repoFactory = Nothing
            End Sub

            Private Sub Startup()
                Dim incidents As IList(Of Incident) = Nothing
                Dim incidentFactory As New IncidentFactory
                incidents = IncidentFactory.GetTwoFakeIncidents

                _repoFactory = New NHibernateRepoFactory
                _fakeRepoFactory = New FakeRepoFactory(incidents)
                _incidentRepo = _fakeRepoFactory.GetIncidentRepo
                _incidentDefinitionRepo = _fakeRepoFactory.GetIncidentDefinitionRepo

                'Start an incident manager session
                _incManager = New IncidentManager.Session(_incidentRepo, _incidentDefinitionRepo, _psalertsEventRepo)
                _incManager.Start()
            End Sub

        End Class

    After a little experimentation I arrived at the above code in the OnStart method. All functionality passed testing when deployed from VS2005 on my development PC; however, when deployed on a true target machine, the service would not start and responds with the following message: "The service on local computer started and then stopped..." Am I going about this the correct way? If not, how can I best implement my incident manager within the confines of the Windows Service class? It seems pointless to implement a timer for the IncidentManager because it already implements its own timer. Any assistance much appreciated. Kind regards, Paul J.
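    One generic first step when a service starts on the development machine but reports "started and then stopped" on the target is to log any startup exception before letting the start fail, since an unhandled exception in OnStart otherwise leaves no trace. A minimal sketch of that pattern is below (shown in C#; the same structure applies to the VB.NET service above, and the Startup body is elided).

        using System;
        using System.ServiceProcess;

        public class IncidentManagerService : ServiceBase
        {
            private bool _started;

            protected override void OnStart(string[] args)
            {
                try
                {
                    if (!_started)
                    {
                        Startup();          // the existing startup logic
                        _started = true;
                    }
                }
                catch (Exception ex)
                {
                    // On the target machine the exception text ends up in the
                    // Application event log instead of the service silently stopping.
                    this.EventLog.WriteEntry("Startup failed: " + ex,
                        System.Diagnostics.EventLogEntryType.Error);
                    throw;  // still fail the start, but now with a recorded reason
                }
            }

            private void Startup()
            {
                // ... build repositories and start the IncidentManager session ...
            }

            protected override void OnStop() { /* ... */ }
        }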

    Read the article

  • Refactor LINQ to SQL custom properties that instantiate a DataContext

    - by Thiago Silva
    I am working on an existing ASP.NET MVC app that started small and has grown over time to require a good re-architecture and refactoring. One thing that I am struggling with is that we've got partial classes of the LINQ to SQL entities so we could add some extra properties, but these properties create a new data context and query the DB for a subset of data. This is the equivalent of doing the following in SQL, which is not a very good way to write the query as opposed to using joins:

        SELECT tbl1.stuff,
               (SELECT nestedValue FROM tbl2 WHERE tbl2.Foo = tbl1.Bar),
               tbl1.moreStuff
        FROM tbl1

    So, in short, here's what we've got in some of our partial entity classes:

        public partial class Ticket
        {
            public StatusUpdate LastStatusUpdate
            {
                get
                {
                    // this static method call returns a new DataContext but needs to be refactored
                    var ctx = OurDataContext.GetContext();
                    var su = Compiled_Query_GetLastUpdate(ctx, this.TicketId);
                    return su;
                }
            }
        }

    We've got some functions that create a compiled query, but the issue is that we also have some DataLoadOptions defined on the DataContext, and because we instantiate a new DataContext to get these nested properties, we get the exception "Compiled Queries across DataContexts with different LoadOptions not supported". The first DataContext comes from a DataContextFactory that we implemented with the refactorings, but this second one is just hanging off the entity property getter. We're implementing the Repository pattern in the refactoring process, so we must stop doing things like the above. Does anyone know of a good way to address this issue?
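    A minimal sketch of the direction described above: move the lookup out of the entity property and into a repository that is handed the single unit-of-work DataContext, so no query ever binds to a context created inside a getter. The StatusUpdate mapping and its column names (StatusUpdateId, TicketId, CreatedOn) are stand-ins for the generated entity, not the real schema.

        using System;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;

        // Minimal stand-in for the designer-generated entity; column names are assumptions.
        [Table(Name = "StatusUpdate")]
        public class StatusUpdate
        {
            [Column(IsPrimaryKey = true)] public int StatusUpdateId { get; set; }
            [Column] public int TicketId { get; set; }
            [Column] public DateTime CreatedOn { get; set; }
        }

        public class TicketRepository
        {
            // The repository is handed the one DataContext for the unit of work
            // (e.g. from the DataContextFactory), so entity partial classes no
            // longer spin up their own contexts.
            private readonly DataContext _ctx;

            public TicketRepository(DataContext ctx)
            {
                _ctx = ctx;
            }

            // Replaces the Ticket.LastStatusUpdate property getter: same lookup,
            // but against the shared context and its single set of DataLoadOptions.
            public StatusUpdate GetLastStatusUpdate(int ticketId)
            {
                return _ctx.GetTable<StatusUpdate>()
                           .Where(su => su.TicketId == ticketId)
                           .OrderByDescending(su => su.CreatedOn)
                           .FirstOrDefault();
            }
        }

    Callers (controllers, or a service layer) then ask the repository for the latest status update instead of reading it off the entity, which also keeps compiled queries, if they are reintroduced later, bound to the one context they are invoked with.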

    Read the article
