Search Results

Search found 54055 results on 2163 pages for 'multiple files'.

Page 172/2163 | < Previous Page | 168 169 170 171 172 173 174 175 176 177 178 179  | Next Page >

  • Multiple AJAX requests and progress bars

    - by hunt
    Hi, in the following piece of code I create a progress bar and show its progress while an AJAX request is being processed. I fake the progress by adding 5 to the cnt counter on each tick, and when the counter reaches 90 I check the request: if it has not completed yet I pause/disable the progress bar, and whenever the response arrives I fill the bar to 100. Now the problem is that I want to add multiple progress bars, because I am firing multiple AJAX requests. The code below handles only one request and one progress bar, but I want it to work for more than one. Since global variables are used here to track the response and the timer id, I don't know how well I can handle it for multiple requests.

        var cnt = 0;
        var res = null;

        function getProgress(data) {
            res = data;
        }

        var i = 0;
        $('#start').click(function() {
            i = setInterval(function() {
                if (res != null) {
                    clearInterval(i);
                    $("#pb1").progressbar("option", "value", cnt = cnt + 100);
                }
                var value = $("#pb1").progressbar("option", "value");
                if (value >= 90 && res == null) {
                    $("#pb1").progressbar("option", "disable");
                } else {
                    $("#pb1").progressbar("option", "value", cnt = cnt + 5);
                }
            }, 2500);
            $.ajax({
                url: 'http://localhost/beta/demo.php',
                success: getProgress
            });
        });

        $("#pb1").progressbar({
            value: 0,
            change: function(event, ui) {
                if (res != null) clearInterval(i);
            }
        });

    Read the article

  • Accessing mapped network drive from ColdFusion

    - by Kip
    I am having a problem accessing a mapped drive in ColdFusion. I have \\server\files\sharing mapped to z:\. If I run this code, it says the directory exists for the full path but not for the mapped one:

        <cfscript>
            fullPath = "\\server\files\sharing\reports";
            mappedPath = "z:\reports";
            WriteOutput("fullPath exists: #DirectoryExists(fullPath)#<br/>"); //YES
            WriteOutput("mappedPath exists: #DirectoryExists(mappedPath)#"); //NO
        </cfscript>

    I have done some Googling and have found a few people with the same problem, but the solution was always to use the full path. Is there a reason ColdFusion wouldn't be able to see or access the mapped drive? And if so, are there any workarounds (maybe a system call to get the full path of the mapped drive)?

    Read the article

  • iPhone / NSArray: How do I format a text file to be read in using arrayWithContentsOfFile?

    - by nickthedude
    I have several large word lists and I had been loading them directly in code as follows:

        NSArray *dict3 = [[NSArray alloc] initWithObjects:@"abled", @"about", @"above", @"absurd", @"absurdity", ...

    but now it is actually causing an exc_bad_access error. Weirdly, I know it seems implausible, but for some reason IT IS causing it (trust me on this, I just spent the better part of a day debugging this crap). Anyway, I'm looking to get these humongous lines of code out of my files, but I'm not sure what the best approach is. I guess I could do a plist, but I need to figure out how to automate the process. It would be easiest if I could just use the text files I've been compiling, so if anyone knows how I can format the text file so that

        myArray = [NSArray arrayWithContentsOfFile:@"myTextFile.txt"];

    will be read in correctly as one word per element in the array, it would really be appreciated. Thanks, Nick
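
    Since arrayWithContentsOfFile: expects a property list rather than a plain text file, one route is to convert the word list to a plist offline and bundle that instead. Below is a minimal Python sketch of such a conversion (the filenames are placeholders and it assumes one word per line); the resulting plist has an array of strings at its root, which arrayWithContentsOfFile: can load directly.

        import plistlib

        words_txt = "words.txt"      # placeholder: your existing word list, one word per line
        words_plist = "words.plist"  # placeholder: output file to add to the app bundle

        with open(words_txt, "r", encoding="utf-8") as f:
            # Strip whitespace and skip blank lines.
            words = [line.strip() for line in f if line.strip()]

        with open(words_plist, "wb") as f:
            # Writes an XML plist whose root element is an array of strings.
            plistlib.dump(words, f)

        print("Wrote", len(words), "words to", words_plist)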

    Read the article

  • SSIS process files from folder

    - by RT
    Background: I have a folder that gets pumped with files continuously. My SSIS package needs to process the files and delete them. The SSIS package is scheduled to run once every minute, and I'm picking up the files in ascending order of file creation time. I'm building an array of files and then processing and deleting them one at a time. Problem: if an instance of my package takes longer than one minute to run, the next instance of the SSIS package will pick up some of the files the previous instance has in its buffer. By the time the second instance of the package gets around to processing a file, it may already have been deleted by the first instance, creating an exception condition. I was wondering whether there is a way to avoid the exception condition. Thanks.
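
    Not SSIS-specific, but the usual way out of this race is for each run to atomically claim a file (for example by moving or renaming it into a per-run staging folder) before doing any real work, so an overlapping run simply skips anything already claimed; in SSIS the claim could be a File System Task that moves the file before the data flow touches it. A rough Python sketch of the claim-by-rename idea, with made-up folder names:

        import os
        import uuid

        inbox = "incoming"         # placeholder: the folder being pumped with files
        claimed = "processing"     # placeholder: staging folder on the same volume
        os.makedirs(claimed, exist_ok=True)
        run_id = uuid.uuid4().hex  # unique per run, so overlapping runs never collide

        def process_file(path):
            # Placeholder for the real work (parse, load, etc.).
            print("processing", path)

        def created(name):
            try:
                return os.path.getctime(os.path.join(inbox, name))
            except OSError:
                return float("inf")  # file vanished between listing and sorting

        for name in sorted(os.listdir(inbox), key=created):
            src = os.path.join(inbox, name)
            dst = os.path.join(claimed, run_id + "_" + name)
            try:
                os.rename(src, dst)  # atomic claim; fails if another run already took it
            except OSError:
                continue             # claimed (or deleted) by someone else, skip it
            process_file(dst)
            os.remove(dst)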

    Read the article

  • problems selecting a multiple select value from database in Rails

    - by Ramy
    From inside of a form_for in Rails, I'm inserting multiple select values into the database, like this:

        <div class="new-partner-form">
          <%= form_for [:admin, matching_profile.partner, matching_profile], :html => {:id => "edit_profile", :multipart => true} do |f| %>
            <%= f.submit "Submit", :class => "hidden" %>
            <div class="rounded-block quarter-wide radio-group">
              <h4>Exclude customers from source:</h4>
              <%= f.select :source, User.select(:source).group(:source).order(:source).map { |u| [u.source, u.source] }, {:include_blank => false}, {:multiple => true} %>
              <%= f.error_message_on :source %>
            </div>

    I'm then trying to pull the value from the database like this:

        def does_not_contain_source(matching_profiles)
          Expression.select(matching_profiles, :source) do |keyword|
            Rails.logger.info("Keyword is : " + keyword)
            @customer_source_tokenizer ||= Tokenizer.new(User.select(:source).where("id = ?", self.owner_id).map { |u| u.source }[0]) # User.select("source").where("id = ?", self.owner_id).to_s)
            @customer_source_tokenizer.divergent?(keyword)
          end
        end

    but I'm getting this:

        ExpressionErrors: Bad syntax: --- - "" - B - ""

    That is what the value is in the database, but it seems to choke when I access it this way. What's the right way to do this?

    Read the article

  • Beginner Question: For extracting a large subset of a table from MySQL, how do indexing and the order of tables in the join matter?

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has the columns colA, colB, colC, mydata and A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has 3 columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4 GB RAM, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. (Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries.) Now my questions:
    1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to use the multi-column index.
    2) Is INNER JOIN correct? I just want results where a match is found.
    3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.
    4) Does the order of the conditions after ON matter? This previous answer says that the optimizer also takes care of the execution order.
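
    For illustration only, here is a small runnable sketch of the multi-column indexes described above and of asking the engine whether it uses them. It uses Python's built-in sqlite3 rather than MySQL, so the optimizer details differ; on MySQL the equivalent statements would be CREATE INDEX ... ON tblA (colA, colB) and EXPLAIN SELECT ....

        import sqlite3

        con = sqlite3.connect(":memory:")
        cur = con.cursor()

        # Tiny stand-ins for the real tables (names taken from the question).
        cur.execute("CREATE TABLE tblA (colA INT, colB INT, colC INT, mydata TEXT, A_id INTEGER PRIMARY KEY)")
        cur.execute("CREATE TABLE tblB (colA INT, colB INT, B_id INTEGER PRIMARY KEY)")
        cur.executemany("INSERT INTO tblA (colA, colB, colC, mydata) VALUES (?, ?, ?, 'x')",
                        [(a, b, c) for a in range(10) for b in range(10) for c in range(10)])
        cur.executemany("INSERT INTO tblB (colA, colB) VALUES (?, ?)", [(a, a) for a in range(10)])

        # One multi-column index per table on the join key, as suggested in the question.
        cur.execute("CREATE INDEX idx_a ON tblA (colA, colB)")
        cur.execute("CREATE INDEX idx_b ON tblB (colA, colB)")

        query = """SELECT a.colA, a.colB, a.colC, a.mydata
                   FROM tblA AS a
                   INNER JOIN tblB AS b ON a.colA = b.colA AND a.colB = b.colB"""

        # Ask the engine how it plans to execute the join (MySQL: EXPLAIN <query>).
        for row in cur.execute("EXPLAIN QUERY PLAN " + query):
            print(row)

        print(len(cur.execute(query).fetchall()), "matching rows")
        con.close()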

    Read the article

  • HSQLDB and in-memory files

    - by lewap
    Is it possible to set up HSQLDB in a way so that the files with the db information are written into memory instead of using actual files? I want to use HSQLDB to export some data structures together with Hibernate mappings. It is, however, not possible to write temporary files, so I need to generate the files in memory and return a stream with their contents as a response. Setting HSQLDB to use nio does not seem to be a solution, because there is no way to get hold of those files before they get written onto the filesystem. What I'm thinking of is a protocol handler for HSQLDB, but I haven't found a suitable solution yet. To describe it in other words: a hack solution would be to pass HSQLDB a stream or several streams. It would then write data into those streams during its operation. After all data is written, the user of the db could then use those streams to send it back over the network.

    Read the article

  • Parse multiple named command line parameters

    - by scholzr
    I need to add the ability for a program to accept multiple named parameters when it is opened via the command line, i.e.

        program.exe /param1=value /param2=value

    and then be able to use these parameters as variables in the program. I have found a couple of ways to accomplish pieces of this, but can't seem to figure out how to put it all together. I have been able to pass one named parameter and recover it using the code below, and while I could duplicate it for every possible named parameter, I know that can't be the preferred way to do this.

        Dim inputArgument As String = "/input="
        Dim inputName As String = ""

        For Each s As String In My.Application.CommandLineArgs
            If s.ToLower.StartsWith(inputArgument) Then
                inputName = s.Remove(0, inputArgument.Length)
            End If
        Next

    Alternatively, I can get multiple unnamed parameters from the command line using My.Application.CommandLineArgs, but this requires that the parameters all be passed in the same order/format each time, and I need to be able to pass a random subset of parameters each time. Ultimately, what I would like to do is separate each argument and value and load them into a multidimensional array for later use. I know that I could find a way to do this by splitting the string at the "=" and stripping the "/", but as I am somewhat new to this, I wanted to see if there is a preferred way of dealing with multiple named parameters.
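
    The question is VB.NET, but the split-at-"=" idea described above maps naturally onto a dictionary keyed by parameter name rather than a multidimensional array. A language-agnostic sketch in Python, purely to show the shape of the parsing; in VB.NET the same shape would be a Dictionary(Of String, String) filled by looping over My.Application.CommandLineArgs and splitting each argument at its first "=".

        import sys

        def parse_named_args(argv):
            """Turn ['/param1=value', '/flag'] into {'param1': 'value', 'flag': ''}."""
            named = {}
            for arg in argv:
                if not arg.startswith("/"):
                    continue                 # ignore anything that isn't /name[=value]
                name, _, value = arg[1:].partition("=")
                named[name.lower()] = value  # later duplicates overwrite earlier ones
            return named

        if __name__ == "__main__":
            args = parse_named_args(sys.argv[1:])
            print(args.get("param1", "<not supplied>"))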

    Read the article

  • .NET: Split web application into multiple DLLs?

    - by aximili
    Is it possible to compile some code-behind (.cs) files (e.g. all the .cs files under a particular folder) into one DLL, and the rest into another DLL? We have some common code and pages (aspx + cs files) that we want to use across many websites. We want this to compile into one DLL (e.g. Common.dll). The rest of the files are website-specific, unique to each website, and should compile into another DLL (e.g. Website3.dll). This is so that if we make changes to the common code-behind, we can just publish Common.dll to all our websites. Is that possible using VS Web Developer Express 2008? Thanks in advance. EDIT: We are already using a class library, but not for pages (aspx + cs).

    Read the article

  • Zip up groups of webpages for viewing in the browser

    - by Arlen Beiler
    I think there should be a standard for saving and viewing bunches of webpages as a website. For instance, say I have a whole bunch of pages, such as I get from the WordPress plugin "Really Static" (which saves the entire site), and I have all the links start with a slash (to make linking to supporting files easier). Now, I can't really use those links if I am reading the pages from the file system. If there were a standard where we could zip up the files, give the archive a unique extension (like "hzip" for html zip), and open it with any browser, which would display it as though the root of that file were the root of the pages ("file://examplefile.hzip/"), the links would then all work. This would really help with sharing and copying groups of webpages. Is this a good idea? A bad one? What do you think?
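
    There is no standard for the proposed hzip extension, but the behaviour it describes (treating an archive as the document root so root-relative links resolve) can be approximated today with a tiny local server. A rough Python sketch, with a placeholder archive name and no content-type handling:

        import zipfile
        from http.server import BaseHTTPRequestHandler, HTTPServer

        ARCHIVE = "site.zip"  # placeholder: a zip whose root contains index.html, css/, etc.

        class ZipHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # "/" maps to index.html; "/css/style.css" maps to css/style.css in the zip.
                name = self.path.split("?")[0].lstrip("/") or "index.html"
                with zipfile.ZipFile(ARCHIVE) as z:
                    try:
                        data = z.read(name)
                    except KeyError:
                        self.send_error(404)
                        return
                self.send_response(200)
                self.end_headers()
                self.wfile.write(data)

        if __name__ == "__main__":
            # Root-relative links now resolve inside the archive, as the question wants.
            HTTPServer(("localhost", 8000), ZipHandler).serve_forever()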

    Read the article

  • How do common web frameworks (Django, Rails, Symfony, etc.) handle multiple instances of the same plugin?

    - by Steven Wei
    Do any of the popular web frameworks solve this problem well? Here's an example: suppose you're running one of these web frameworks and you want to install a blog plugin, except instead of a single blog, you need to run two separate instances of the blog plugin, and you want to keep them segregated. Or say you want to install multiple instances of a user authentication plugin, because you want to segregate your administrative users from your customer user accounts. Or say you want to install multiple instances of a wiki plugin for different parts of your site, or multiple instances of a comments plugin, or whatever else. It seems to me that, at the basic level, each plugin instance would need to be configurable with a different set of database tables and would need to be 'installed' at a different URL path. My experience is mostly with Django and Symfony, and I haven't seen a clean solution to this problem in either of them. They both tend to assume that each plugin (or app, in Django's case) is only ever going to be installed once. I'm curious whether the Rails folks have figured out a clean solution to this problem, or any other framework authors (in any language). And if you were going to design a solution to this problem, what would it look like?

    Read the article

  • How can I detect if a file is binary (non-text) in python?

    - by grieve
    How can I tell if a file is binary (non-text) in python? I am searching through a large set of files in python, and keep getting matches in binary files. This makes the output look incredibly messy. I know I could use grep -I, but I am doing more with the data than what grep allows for. In the past I would have just searched for characters greater than 0x7f, but utf8 and the like make that impossible on modern systems. Ideally the solution would be fast, but any solution will do.
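
    A common heuristic (roughly what grep -I and git use) is to read the first few kilobytes and treat the file as binary if it contains a NUL byte, optionally also requiring that the chunk decode as UTF-8. A minimal sketch:

        def is_binary(path, blocksize=8192):
            """Heuristic: NUL byte in the first block, or bytes that aren't valid UTF-8."""
            with open(path, "rb") as f:
                chunk = f.read(blocksize)
            if b"\x00" in chunk:
                return True
            try:
                # Note: a multi-byte character split at the block boundary can
                # cause a false positive here; good enough for filtering output.
                chunk.decode("utf-8")
            except UnicodeDecodeError:
                return True
            return False

        if __name__ == "__main__":
            import sys
            for name in sys.argv[1:]:
                print(name, "binary" if is_binary(name) else "text")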

    Read the article

  • (win32) What to do when a file remains left open when a remote application crashes or forgets to close it?

    - by Stephane R.
    Hi, I have not worked much with files, and I am wondering about possible issues with accessing remote files on another computer. What if the remote application crashes and doesn't close the file? My aim is to use this Win32 function:

        HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);

    Using the flag OF_SHARE_EXCLUSIVE assures me that any concurrent access will be denied (because several machines write to this file from time to time). But what if the file is left open, for example because of an application crash? How do I put the file back to normal?

    Read the article

  • VB6 project files and SourceSafe

    - by Andrew
    Part of the application I am working on is a legacy VB6 Windows Forms application. All the files in the project are under source control (VSS) except the VB6 project file. From what I can establish from the other developers working on the project, the reason for this is that the COM components used in the projects have different references on each developer's machine. I want to move the project files into VSS so that when files are added to the project these can be updated in the project files, and other developers (and, more importantly, an automated build script) can get the latest project files from SourceSafe. Does anyone know if/how I can achieve this in such a way as to not corrupt the references to the COM components on different development machines?

    Read the article

  • What server log file (on CentOS 5) must I check to locate specific IP activity?

    - by makmour
    Hi! I'm leasing a dedicated server running CentOS 5. I recently set up a blog website that grabs feeds from other news websites and presents them on my blog. Some days later I started seeing high server loads that can't be coming just from the cron jobs I run every now and then to grab feeds from other websites for my blog. That's why I'm trying to open some log files in pico to get a better look at the whole problem. These last few days I have noticed (from the StatCounter service) the same IP address visiting my blog website many times per day, so I want to find out what it is trying to do. I tried looking in all the /var/log log files, and httpd too, but no luck. Is there any other log file I should open, or any other procedure to track this IP's activity on the server?
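
    Assuming Apache, the per-request records live in the access log (commonly /var/log/httpd/access_log on CentOS, or a per-vhost log defined by CustomLog). A small Python sketch that counts hits and the most requested paths for one address; the log path and the IP are placeholders:

        from collections import Counter

        LOG = "/var/log/httpd/access_log"  # typical location on CentOS; adjust to your vhost config
        SUSPECT = "203.0.113.5"            # placeholder: the IP seen in your stats

        hits = 0
        paths = Counter()
        with open(LOG, encoding="utf-8", errors="replace") as f:
            for line in f:
                parts = line.split()
                if not parts or parts[0] != SUSPECT:
                    continue               # combined log format starts with the client IP
                hits += 1
                if len(parts) > 6:
                    paths[parts[6]] += 1   # the path sits inside "GET /path HTTP/1.1"

        print(SUSPECT, "made", hits, "requests")
        for path, n in paths.most_common(10):
            print(n, path)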

    Read the article

  • Adding files to the DPR file vs project paths in Delphi 2010

    - by Robert McCabe
    We are just migrating from Delphi 7 to Delphi 2010 and are having a debate about cleaning up the project paths. We have a number of directories with a large number of .pas files that are included on some project paths, but only a few of those files are actually used by any single project. One option is to eliminate the project paths completely and list all used files in the .dpr. The second option is to keep only the needed files in the .dpr and keep project paths to the directories for the rest of the files. Is there any argument for one option over the other?

    Read the article

  • Issue with Multiple Text Fields and SharedObject Storing

    - by user1662660
    I'm currently working on an AIR for iOS application in Flash CS6. I'm trying to store multiple pieces of data from various text inputs, i.e. "name_txt", "number_txt", etc. I have the following code working for a local save file:

        import flash.events.Event;
        import flash.desktop.NativeApplication;

        var so:SharedObject = SharedObject.getLocal("TravelPal");
        var n1:String = so.data.Number1;

        emerg1.text = n1;
        emerg1.addEventListener(Event.CHANGE, updateEmerg1);

        function updateEmerg1(e:Event):void {
            so.data.Number1 = emerg1.text;
            so.flush();
        }

        NativeApplication.nativeApplication.addEventListener(Event.EXITING, onExit);

        function onExit(e:Event):void {
            so.flush();
        }

    Now, as soon as I create multiple text inputs and attempt to store them in my SharedObject, the whole system just falls apart: none of the text gets saved, not even the previously working one. I'm pretty new to SharedObject usage. What am I missing here? Is this a good way to go about storing multiple text inputs?

    Read the article

  • Diff multiple files in perforce across a revision range

    - by Thanatos
    I'd like to diff a bunch of files across several revisions. For example, I'd like to see a.c, b.c, and c.c from changelist X to changelist Y. p4 diff2 a.c@X a.c@Y (where X and Y are changelist numbers) seems to work, but only sometimes. Specifically, if a.c does not exist at X, I don't get a diff. I'd like to be able to get the diff anyway (even though it'll be the whole file with only adds). The bigger picture: I have several files, across several commits, and I'd like to merge the diffs of these files in these commits, to basically say "this is a diff of what changed in this set of files during this set of changelists".
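
    One way to paper over the missing-at-X case is a small wrapper that asks Perforce whether the file existed at the first changelist and falls back to printing the whole later revision when it did not. A rough Python sketch (the changelist numbers and file list are placeholders, and the existence check here just looks for empty p4 files output, which may need adjusting for your server's messages):

        import subprocess

        FILES = ["a.c", "b.c", "c.c"]  # placeholder file list
        X, Y = 1000, 1200              # placeholder changelist numbers

        def p4(*args):
            """Run a p4 command and return its stdout."""
            return subprocess.run(["p4", *args], capture_output=True, text=True).stdout

        for f in FILES:
            if p4("files", f"{f}@{X}").strip():
                # The file existed at X: a normal two-revision diff.
                print(p4("diff2", f"{f}@{X}", f"{f}@{Y}"))
            else:
                # The file did not exist at X: show the whole @Y revision as "all adds".
                print(f"==== {f} added between @{X} and @{Y} ====")
                print(p4("print", "-q", f"{f}@{Y}"))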

    Read the article

  • iPhone toolbar shared by multiple views

    - by codemonkey
    Another iPhone noob question. The app I'm building needs to show a shared custom UIToolbar across multiple views (and their subviews) within a UITabBarController framework. The contents of the custom toolbar are the same across all the views. I'd like to be able to design the custom toolbar as a xib and handle UI events from its own controller class (I'm assuming I can subclass UIToolbar to do so?). That way I could define IBOutlet and IBAction items, etc. Then I could associate this custom toolbar with each of the UITabBarController views (and their subviews). But I'm having trouble finding out whether that's possible, and if so, how to do it. In particular, I want to be able to push new views onto UINavigationControllers that are each associated with parent UITabBarController tabs. So, to summarize, I want a custom toolbar shared by multiple views, where the views are managed by multiple navigation controllers and the navigation controllers are associated with different tabs of a parent tab bar controller. The tab bar controller itself is launched modally, though I don't believe that's relevant. Anyway, the tab bar controller is working, as are its child navigation controllers. I'm just having a little trouble figuring out how to persist the shared toolbar across the various subviews. I'd settle for a good clean way of implementing this programmatically, though I'd prefer the flexibility of keeping the toolbar's visual design in a xib. Anyone have any suggestions?

    Read the article

  • [C++] Is it possible to use threads to speed up file reading?

    - by Mister Mystère
    Hi there, I want to read a file as fast as possible (40k lines) [Edit: the rest is obsolete]. Edit: Andres Jaan Tack suggested a solution based on one thread per file, and I want to be sure I got this right (and that it is the fastest way):

    - One thread per input file reads the whole file and stores its content in an associated container (as many containers as there are input files).
    - One thread calculates the linear combination of every cell read by the input threads and stores the results in the output container (associated with the output file).
    - One thread writes the content of the output container in blocks (every 4 kB of data, so about 10 lines).

    Should I deduce that I must not use memory-mapped files (because the program is on standby waiting for the data)? Thanks in advance. Sincerely, Mister Mystère.
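
    Purely to make the three-role layout above concrete, here is a rough Python sketch of the same structure using the standard threading and queue modules (file names and coefficients are invented, the readers are joined before the combiner starts, and the writer works per row rather than per 4 kB block). Note that in CPython the GIL means this will not actually speed up the parsing; in C++ the same shape would use std::thread plus a concurrent queue.

        import threading
        import queue

        INPUTS = ["a.txt", "b.txt"]  # placeholder input files, one numeric value per line
        COEFFS = [2.0, -1.0]         # placeholder coefficients for the linear combination
        OUTPUT = "out.txt"

        def read_file(path, store):
            # One reader per input file: slurp the whole file into its own container.
            with open(path) as f:
                store.extend(float(line) for line in f if line.strip())

        def main():
            stores = [[] for _ in INPUTS]
            readers = [threading.Thread(target=read_file, args=(p, s))
                       for p, s in zip(INPUTS, stores)]
            for t in readers:
                t.start()
            for t in readers:
                t.join()

            out_q = queue.Queue()

            def combine():
                # Linear combination, cell by cell, handed to the writer via a queue.
                for cells in zip(*stores):
                    out_q.put(sum(c * x for c, x in zip(COEFFS, cells)))
                out_q.put(None)  # sentinel: no more rows

            def write():
                with open(OUTPUT, "w") as f:
                    while (value := out_q.get()) is not None:
                        f.write(f"{value}\n")

            workers = [threading.Thread(target=combine), threading.Thread(target=write)]
            for t in workers:
                t.start()
            for t in workers:
                t.join()

        if __name__ == "__main__":
            main()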

    Read the article

  • Multiple configurations in Qt

    - by user360607
    Hi all! I'm new to Qt Creator and I have several questions regarding multiple build configurations. A side note: I have Qt Creator 1.3.1 installed on my Linux machine. I need to have two configurations in my Qt Creator project. The thing is that these aren't simply debug and release, but are based on the target architecture: x86 or x64. I came across http://stackoverflow.com/questions/2259192/building-multiple-targets-in-qt-qmake and from that I tried something like:

        Conf_x86 {
            TARGET = MyApp_x86
        }
        Conf_x64 {
            TARGET = MyApp_x64
        }

    This way, however, I don't seem to be able to use the Qt Creator IDE to build each of these separately (Build All, Rebuild All, etc. options from the IDE menu). Is there a way to achieve this, maybe even to show Conf_x86 and Conf_x64 as new build configurations in Qt Creator? One other thing: the Qt I have is 64-bit, so by default the target built using Qt Creator will also be 64-bit. I noticed that the effective qmake call in the build step includes the option '-spec linux-g++-64'. I also noticed that if I add '-spec linux-g++-32' in 'Additional arguments', it overrides '-spec linux-g++-64' and the resulting target is 32-bit. How can I achieve this by simply editing the contents of the .pro file? I saw that all these changes are initially saved in the .pro.user file, but that doesn't suit me at all; I need to be able to make these configurations from the .pro file if possible. Any help will be appreciated. 10x in advance!

    Read the article

  • Can StructureMap be configured so that one can use different .config settings based on whether the project is built in debug or release mode?

    - by Mark Rogers
    I know that in StructureMap I can read from my *.config files (or files referenced by them) when I want to pass specific arguments to an object's constructor:

        ForRequestedType<IConfiguration>()
            .TheDefault.Is.OfConcreteType<SqlServerConfiguration>()
            .WithCtorArg("db_server_address")
            .EqualToAppSetting("data.db_server_address")

    But what I would like to do is read from one config setting in debug mode and another in release mode. Sure, I could surround the .EqualToAppSetting("data.db_server_address") with #if DEBUG, but for some reason those statements make me cringe a little when I put them in. I'd like to know if there is some way to do this with the StructureMap library itself. So can I feed my objects different settings based on whether the project is built in debug or release mode?

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people deal with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) while trying to do so efficiently, maintaining database agnosticism, and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about when the operation needs to be transactional? Example:

        class Bob
        {
            private IWidgetRepository _widgetRepo;
            private IThingRepository _thingRepo;
            private IWhatsitRepository _whatsitRepo;

            public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo)
            {
                _widgetRepo = widgetRepo;
                _thingRepo = thingRepo;
                _whatsitRepo = whatsitRepo;
            }

            public void DoStuff()
            {
                _widgetRepo.StoreSomeStuff();
                _thingRepo.ReadSomeStuff();
                _whatsitRepo.SaveSomething();
            }

            public void DoOtherThing()
            {
                _widgetRepo.UpdateSomething();
                DoStuff();
            }
        }

    How do I keep my access to the database efficient and avoid a constant stream of open-close-open-close on connections, and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
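
    Not an ADO.NET answer as such, but the shape most solutions take is a unit-of-work object that owns a single connection and transaction and is handed to every repository, so the business layer decides the transaction boundary and the connection is opened once per operation (in ADO.NET terms, sharing one DbConnection/DbTransaction, or wrapping the operation in a TransactionScope, rather than letting each repository open its own). A small illustration of that shape using Python's built-in sqlite3, with invented names:

        import sqlite3

        class UnitOfWork:
            """Owns the single connection (and transaction) shared by every repository."""
            def __init__(self, path=":memory:"):
                self.conn = sqlite3.connect(path)

            def __enter__(self):
                return self

            def __exit__(self, exc_type, exc, tb):
                # One commit/rollback for the whole business operation.
                if exc_type is None:
                    self.conn.commit()
                else:
                    self.conn.rollback()
                self.conn.close()

        class WidgetRepository:
            def __init__(self, uow):
                self.conn = uow.conn  # repositories never open their own connection

            def store(self, name):
                self.conn.execute("INSERT INTO widgets (name) VALUES (?)", (name,))

        class ThingRepository:
            def __init__(self, uow):
                self.conn = uow.conn

            def read_widgets(self):
                # Reads what the widget repo wrote, inside the same open transaction.
                return self.conn.execute("SELECT name FROM widgets").fetchall()

        # The business layer opens one unit of work per operation and passes it along.
        with UnitOfWork() as uow:
            uow.conn.execute("CREATE TABLE widgets (name TEXT)")
            widgets = WidgetRepository(uow)
            things = ThingRepository(uow)
            widgets.store("sprocket")
            print(things.read_widgets())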

    Read the article

  • Doctrine: Unable to execute either CROSS JOIN or SELECT FROM Table1, Table2?

    - by ropstah
    Using Doctrine I'm trying to execute either (1) a CROSS JOIN statement or (2) a SELECT FROM Table1, Table2 statement. Both seem to fail. The CROSS JOIN does execute, but the results are just wrong compared to executing it in Navicat. The multiple-table SELECT doesn't even execute, because Doctrine automatically tries to LEFT JOIN the second table. The cross join statement (this runs, but it doesn't include the joined records where the refClass User_Setting doesn't have a value):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u CROSS JOIN Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));

    And the select from multiple tables (this doesn't even run; it does not spot the many-to-many relationship between User and Setting in the first ->from() part and throws an exception: '"User_Setting" with an alias of "us" in your query does not reference the parent component it is related to.'):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u, Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));

    Read the article

  • Config file format

    - by Felics
    Hello, does anyone know of a file format for configuration files that is easy for humans to read? I want to have something like tag = value, where value may be a String, a Number (int or float), a Boolean (true/false), an Array (of String, Number or Boolean values), or another structure (it will be clearer what I mean in the following example). Now I use something like this:

        IntTag=1
        FloatTag=1.1
        StringTag="a string"
        BoolTag=true
        ArrayTag1=[1 2 3]
        ArrayTag2=[1.1 2.1 3.1]
        ArrayTag3=["str1" "str2" "str3"]
        StructTag=
        {
            NestedTag1=1
            NestedTag2="str1"
        }

    and so on. Parsing is easy, but for large files I find it hard to read/edit in text editors. I don't like XML for the same reason: it's hard to read. INI does not support nesting, and I want to be able to nest tags. I also don't want a complicated format, because I will use only a limited set of value types, as mentioned above. Thanks for any help.
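
    One existing format that covers exactly this set of value types (strings, numbers, booleans, arrays and nested structures) is JSON; YAML and TOML are common alternatives when comments or a lighter syntax matter. Just to show the mapping, the example above expressed as JSON and round-tripped with Python's standard library:

        import json

        config = {
            "IntTag": 1,
            "FloatTag": 1.1,
            "StringTag": "a string",
            "BoolTag": True,
            "ArrayTag1": [1, 2, 3],
            "ArrayTag2": [1.1, 2.1, 3.1],
            "ArrayTag3": ["str1", "str2", "str3"],
            "StructTag": {"NestedTag1": 1, "NestedTag2": "str1"},
        }

        # Strings, numbers, booleans, arrays and nested structures all round-trip.
        text = json.dumps(config, indent=4)
        print(text)
        assert json.loads(text) == config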

    Read the article
