Search Results

Search found 4690 results on 188 pages for 'multi tenant'.

Page 81/188 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Async file uploads in Firefox reset on any DOM change

    - by Vibhu
    I'm pretty sure this is a Firefox or Flash-related bug, but I just want to check if anyone has run into this problem or knows how to fix it. Basically, we have a multi-file upload widget for our highly dynamic web app (think Gmail). We've tried both uploadify for jQuery and YUI Uploader. We've also tried taking those out of our app interface and putting them in an iframe. What happens is that in the event of any DOM manipulation, even if the uploader is in an iframe (be it a tab change in our web app that covers the iframe temporarily, or a block, etc.), the uploader will stop its current upload. In the case of YUI Uploader, it fires the "contentReady" event again. This ONLY happens in Firefox; IE and Chrome are fine. In case you are wondering, we really don't have any custom needs here. We just need multi-file upload support, and we need to give people free rein to tab around in our interface while an upload is in progress. It seems like Yahoo! and Gmail have both solved this problem. How? What are we doing wrong?

    Read the article

  • delayed evaluation of code in subroutines - 5.8 vs. 5.10 and 5.12

    - by Brock
    This bit of code behaves differently under perl 5.8 than it does under perl 5.12:

        my $badcode = sub { 1 / 0 };
        print "Made it past the bad code.\n";

        [brock@chase tmp]$ /usr/bin/perl -v
        This is perl, v5.8.8 built for i486-linux-gnu-thread-multi
        [brock@chase tmp]$ /usr/bin/perl badcode.pl
        Illegal division by zero at badcode.pl line 1.
        [brock@chase tmp]$ /usr/local/bin/perl -v
        This is perl 5, version 12, subversion 0 (v5.12.0) built for i686-linux
        [brock@chase tmp]$ /usr/local/bin/perl badcode.pl
        Made it past the bad code.

    Under perl 5.10.1, it behaves as it does under 5.12:

        brock@laptop:/var/tmp$ perl -v
        This is perl, v5.10.1 (*) built for i486-linux-gnu-thread-multi
        brock@laptop:/var/tmp$ perl badcode.pl
        Made it past the bad code.

    I get the same results with a named subroutine, e.g. sub badcode { 1 / 0 }. I don't see anything about this in the perl5100delta pod. Is this an undocumented change? An unintended side effect of some other change? (For the record, I think 5.10 and 5.12 are doing the Right Thing.)
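
    A hedged way to poke at this, assuming the difference comes from compile-time constant folding of 1 / 0 (which would kill the whole compile under 5.8): deparse the sub on a perl that gets that far and look at what the compiler left behind. B::Deparse ships with core perl, and coderef2text is its documented entry point for code refs.

        # badcode-deparse.pl - a sketch; it only runs to completion on perls that
        # survive compiling the file (5.10+ here). Under 5.8 the script dies at
        # compile time, before any of this executes.
        use strict;
        use warnings;
        use B::Deparse;

        my $badcode = sub { 1 / 0 };
        print "Made it past the bad code.\n";

        # Show what the compiled sub body actually contains.
        print B::Deparse->new->coderef2text($badcode), "\n";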

    Read the article

  • How to create a high quality icon for my Windows application?

    - by Patrick Klug
    If you are running Windows with a higher DPI setting, you will notice that most application icons on the desktop look terrible. Even high-profile application icons such as Google Chrome look terrible, while for example the Firefox, Skype and MS Office icons look sharp: (example) I suspect that most icons look blurry because a lower-resolution icon is scaled up rather than a higher-resolution icon being used. I want to give my application a high-quality icon and can't seem to convince Windows to use the higher-resolution icon. I have created a multi-resolution icon with the free icon editor IcoFX. The icon is provided in 16x16, 24x24, 32x32, 48x48, 128x128 and 256x256 (!) (all in 32 bit including alpha channel), yet Windows seems to use the 128x128 version of the icon on the desktop and scale it up, which looks terrible. (I am using Windows 7 64-bit; the icon is placed by means of setting up a shortcut in the msi (created via a Visual Studio 2008 Setup Project) and pointing it to the .ico file that contains the multi-resolution icon.) I have tried removing the 128x128 icon, but to no avail. Interestingly, in Windows Explorer the icon looks great even when using the Extra Large Icons setting. How can I create a high-quality desktop icon that looks great at higher DPI settings on Windows?

    Read the article

  • asynchronous pages

    - by lockedscope
    I have just read the multi-threading and custom threading in ASP.NET articles: http://www.williablog.net/williablog/post/2008/12/16/Custom-Threading-in-ASPNET.aspx and http://www.williablog.net/williablog/post/2008/12/16/Multi-Threading-in-ASPNET.aspx. I have a couple of questions. What does he mean by returning a thread to the pool? Is that thread completely removed from memory, or put into a state where it is not scheduled on the CPU (is it in a sleep state or whatever)? If that thread is removed from memory, how could it survive after the async point? How does this mechanism work? Is every object (page class, request, response, etc.) copied somewhere else before it is disposed? (Or is it just waiting in a sleep state, and then woken when the async call ends?) He says that "having said that, making pages asynchronous is not really about improving performance, it is about improving scalability", but then he says "I'm sorry to say that it will do nothing for scalability or performance." So which one is true? Or for which case(s) is each true?
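
    A sketch of the mechanism those articles discuss, assuming ASP.NET 2.0-style asynchronous pages with RegisterAsyncTask (this is not the articles' own code, and the URL below is a placeholder). The point it illustrates: between Begin and End no worker thread is held by the request, which is what "returning the thread to the pool" refers to; the page and request objects are kept alive by the pending IAsyncResult, not by a sleeping thread.

        // Code-behind sketch; the .aspx needs <%@ Page Async="true" %>.
        using System;
        using System.Net;
        using System.Web.UI;

        public partial class SlowPage : Page
        {
            private WebRequest _req;

            protected void Page_Load(object sender, EventArgs e)
            {
                RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, null, null));
            }

            private IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                _req = WebRequest.Create("http://example.com/slow-service"); // placeholder I/O
                // The worker thread goes back to the pool as soon as this returns.
                return _req.BeginGetResponse(cb, state);
            }

            private void EndWork(IAsyncResult ar)
            {
                // Runs when the I/O completes, usually on a different pool thread;
                // the page instance and HttpContext are still available here.
                using (WebResponse resp = _req.EndGetResponse(ar)) { }
            }
        }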

    Read the article

  • CURL - Problem with Authentication

    - by danit
    I need to add authentication to this function:

        function multiRequest($data, $options = array()) {
          // array of curl handles
          $curly = array();
          // data to be returned
          $result = array();
          // multi handle
          $mh = curl_multi_init();

          // loop through $data and create curl handles
          // then add them to the multi-handle
          foreach ($data as $id => $d) {
            $curly[$id] = curl_init();
            $url = (is_array($d) && !empty($d['url'])) ? $d['url'] : $d;
            curl_setopt($curly[$id], CURLOPT_URL, $url);
            curl_setopt($curly[$id], CURLOPT_HEADER, 0);
            curl_setopt($curly[$id], CURLOPT_RETURNTRANSFER, 1);
            // post?
            if (is_array($d)) {
              if (!empty($d['post'])) {
                curl_setopt($curly[$id], CURLOPT_POST, 1);
                curl_setopt($curly[$id], CURLOPT_POSTFIELDS, $d['post']);
              }
            }
            // extra options?
            if (!empty($options)) {
              curl_setopt_array($curly[$id], $options);
            }
            curl_multi_add_handle($mh, $curly[$id]);
          }

          // execute the handles
          $running = null;
          do {
            curl_multi_exec($mh, $running);
          } while ($running > 0);

          // get content and remove handles
          foreach ($curly as $id => $c) {
            $result[$id] = curl_multi_getcontent($c);
            curl_multi_remove_handle($mh, $c);
          }

          // all done
          curl_multi_close($mh);
          return $result;
        }

    I'm looking to add authentication to this function, something along these lines:

        curl_setopt($curly[$id], CURLOPT_USERPWD, "$username:$password");

    Anyone help?
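
    A hedged sketch of where that line would go: per-handle options have to be set inside the foreach, before curl_multi_add_handle(). The 'user'/'pass' keys on $d are invented here as one way to carry per-URL credentials; a single shared $username/$password passed into the function would work the same way.

        // Inside the foreach ($data as $id => $d) loop, after the other curl_setopt() calls:
        if (is_array($d) && !empty($d['user'])) {
            curl_setopt($curly[$id], CURLOPT_HTTPAUTH, CURLAUTH_BASIC);   // assumption: basic auth
            curl_setopt($curly[$id], CURLOPT_USERPWD, $d['user'] . ':' . $d['pass']);
        }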

    Read the article

  • Is there a Standard or Best Practice for Perl Programs, as opposed to Perl Modules?

    - by swestrup
    I've written any number of perl modules in the past, and more than a few stand-alone perl programs, but I've never released a multi-file perl program into the wild before. I have a perl program that is almost at the beta stage and is going to be released open source. It requires a number of data files, as well as some external perl modules -- some I've written myself, and some from CPAN -- that I'll have to bundle with it so as to ensure that someone can just download my program and install it without worrying about hunting for obscure modules. So, it sounds to me like I need to write an installer to copy all the files to standard locations so that a user can easily install everything. The trouble is, I have no idea what the standard practice would be for this. I have found lots of tutorials on perl module standards, but none on perl program standards. Does anyone have any pointers to standard paths, installation procedures, etc., for perl programs? This is going to be complicated by the fact that the program is multi-platform. I've been testing it in Linux, but it's designed to work equally well in Windows.
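
    For the packaging question, a hedged sketch of one conventional route (all names are placeholders): lay the program out like a CPAN distribution (bin/, lib/, and a share/ directory for the data files, located at runtime with something like File::ShareDir), and let a Makefile.PL install the script and declare the CPAN dependencies instead of bundling them. The same layout installs on both Linux and Windows perls.

        # Makefile.PL - sketch only, assuming a CPAN-style layout.
        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME         => 'My::App',            # placeholder distribution name
            VERSION_FROM => 'lib/My/App.pm',      # module that carries $VERSION
            EXE_FILES    => ['bin/myapp'],        # the program itself, installed to the bin path
            PREREQ_PM    => {
                # Declared dependencies are fetched by the installer, not bundled.
                'Getopt::Long' => 0,
                'Config::Tiny' => 0,              # illustrative prerequisite
            },
        );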

    Read the article

  • Running bundle install fails trying to remote fetch from rubygems.org/quick/Marshal...

    - by dreeves
    I'm getting a strange error when doing bundle install:

        $ bundle install
        Fetching source index for http://rubygems.org/
        rvm/rubies/ree-1.8.7-2010.02/lib/ruby/site_ruby/1.8/rubygems/remote_fetcher.rb:304:in `open_uri_or_path':
          bad response Not Found 404 (http://rubygems.org/quick/Marshal.4.8/resque-scheduler-1.09.7.gemspec.rz)
          (Gem::RemoteFetcher::FetchError)

    I've tried bundle update, gem source -c, gem update --system, gem cleanup, etc. Nothing seems to solve this. I notice that the URL beginning with http://rubygems.org/quick does seem to be a 404 -- I don't think that's any problem with my network, though if that's reachable for anyone else then that would be a simple explanation for my problem. More hints: if I just gem install resque-scheduler it works fine:

        $ gem install resque-scheduler
        Successfully installed resque-scheduler-1.9.7
        1 gem installed
        Installing ri documentation for resque-scheduler-1.9.7...
        Installing RDoc documentation for resque-scheduler-1.9.7...

    And here's my Gemfile:

        source 'http://rubygems.org'

        gem 'json'
        gem 'rails', '>=3.0.0'
        gem 'mongo'
        gem 'mongo_mapper', :git => 'git://github.com/jnunemaker/mongomapper', :branch => 'rails3'
        gem 'bson_ext', '1.1'
        gem 'bson', '1.1'
        gem 'mm-multi-parameter-attributes', :git => 'git://github.com/rlivsey/mm-multi-parameter-attributes.git'
        gem 'devise', '~>1.1.3'
        gem 'devise_invitable', '~> 0.3.4'
        gem 'devise-mongo_mapper', :git => 'git://github.com/collectiveidea/devise-mongo_mapper'
        gem 'carrierwave', :git => 'git://github.com/rsofaer/carrierwave.git', :branch => 'master'
        gem 'mini_magick'
        gem 'jquery-rails', '>= 0.2.6'
        gem 'resque'
        gem 'resque-scheduler'
        gem 'SystemTimer'
        gem 'capistrano'
        gem 'will_paginate', '3.0.pre2'
        gem 'twitter', '~> 1.0.0'
        gem 'oauth', '~> 0.4.4'
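
    One hedged guess, given that the 404 is for a 1.09.7 gemspec while gem install happily resolves 1.9.7: the source index may be advertising a version whose gemspec file is missing, in which case pinning the version that does resolve (or pointing Bundler at the project's git repository instead of the index) sidesteps the bad entry. Worth confirming before treating it as the root cause.

        # In the Gemfile, instead of the bare declaration:
        gem 'resque-scheduler', '1.9.7'
        # or bypass the gem index entirely (repository URL intentionally left out):
        # gem 'resque-scheduler', :git => '...'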

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do in shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:

    - Curl::Multi: it is somehow faster, but misses 50-90% of the files (does not download them and gives no reason/code)
    - multiple threads with Curl::Easy: around the same speed as single-threaded

    Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code than to do the downloading for this case in a different way. Before this, I was calling command-line wget, which I fed a file with the list of URLs. However, not all errors were handled, and it was not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help.) Any hints appreciated.
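
    A hedged sketch of the fallback mentioned at the end (multiple threads shelling out to command-line curl), since that sidesteps whatever the Ruby binding is doing differently. All names here are illustrative, not taken from the real download manager.

        require 'thread'

        # Placeholder input in the same shape the question uses.
        batch_urls = [
          { :url => 'http://example.com/page/1', :file => '/tmp/page1.html' },
          { :url => 'http://example.com/page/2', :file => '/tmp/page2.html' },
        ]

        thread_count = 8
        queue = Queue.new
        batch_urls.each { |info| queue << info }

        workers = (1..thread_count).map do
          Thread.new do
            loop do
              info = (queue.pop(true) rescue nil)   # non-blocking pop; nil once the queue is drained
              break unless info
              # One curl process per URL; '-s' silences progress, '-o' names the output file.
              system('curl', '-s', '-o', info[:file], info[:url])
            end
          end
        end
        workers.each(&:join)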

    Read the article

  • Using FBML in a ruby sinatra app

    - by Gearóid
    Hi, I'm building an application in Ruby using the Sinatra framework and am having trouble rendering some FBML elements. I'm currently trying to render an fb:multi-friend-selector so the user can select which friends they want to invite. However, when I write the following in my code:

        <fb:fbml>
          <fb:request-form action="/inviteFriends" method="POST" invite="true"
              type="MY APP" content="Invite Friends">
            <fb:multi-friend-selector showborder="false"
                actiontext="Invite your friends to use YOUR APP NAME.">
          </fb:request-form>
        </fb:fbml>

    nothing renders with the markup above. I've included the regular Facebook xsds for the taglibs in my html tag and have tested FBML on the page using the following code:

        <fb:name useyou="false" uid="USER_ID" linked="false"/>

    This code works correctly and displays the user's name. I've tried a simple example like the one on http://wiki.developers.facebook.com/index.php/Fb:random but again nothing is rendered in the browser. Do I need to include some special javascript or anything? I would greatly appreciate some help with this. Thanks in advance -gearoid.

    Read the article

  • How to access non-first matches with xpath in Selenium RC ?

    - by Gj
    I have 20 labels in my page:

        In [85]: sel.get_xpath_count("//label")
        Out[85]: u'20'

    And I can get the first one by default:

        In [86]: sel.get_text("xpath=//label")
        Out[86]: u'First label:'

    But, unlike the xpath docs I've found, I'm getting an error trying to subscript the xpath to get to the second label's text:

        In [87]: sel.get_text("xpath=//label[2]")
        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (216, 0))

        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (1186, 0))

        ---------------------------------------------------------------------------
        Exception                                 Traceback (most recent call last)

        /Users/me/<ipython console> in <module>()

        /Users/me/selenium.pyc in get_text(self, locator)
           1187         'locator' is an element locator
           1188         """
        -> 1189         return self.get_string("getText", [locator,])
           1190
           1191

        /Users/me/selenium.pyc in get_string(self, verb, args)
            217
            218     def get_string(self, verb, args):
        --> 219         result = self.do_command(verb, args)
            220         return result[3:]
            221

        /Users/me/selenium.pyc in do_command(self, verb, args)
            213         #print "Selenium Result: " + repr(data) + "\n\n"
            214         if (not data.startswith('OK')):
        --> 215             raise Exception, data
            216         return data
            217

        Exception: ERROR: Element xpath=//label[2] not found

    What gives?
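
    A hedged note plus sketch (the same Selenium RC python client object sel is assumed): //label[2] asks for a label that is the second label child of its own parent, so it matches nothing when every label is the only label under its parent. Wrapping the step in parentheses makes the [2] index into the full match list instead.

        # XPath positions are 1-based; (//label)[i] picks the i-th match overall.
        count = int(sel.get_xpath_count("//label"))
        for i in range(1, count + 1):
            print(sel.get_text("xpath=(//label)[%d]" % i))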

    Read the article

  • Regex to add CDATA for malformed XML

    - by AntonioCS
    Hey guys! I have this huge xml file (13 MB) and it has some malformed values. Here is a sample of the xml:

        <propertylist>
          <adprop index="0" proptype="type" value="Ft"/>
          <adprop index="0" proptype="category" value="Bs"/>
          <adprop index="0" proptype="subcategory" value="Bsm"/>
          <adprop index="0" proptype="description" value="MOONEN CUSTOM 58"/>
        </propertylist>

    Now this is OK, but I have many other nodes that are not encapsulated in CDATA and need to be. The node that gives me problems is the <adprop index="0" proptype="description" value=""/> one. I created this regular expression:

        <adprop index="0" proptype="description" value="(.+)"\/>

    to catch that node and replace it with this:

        <adprop index="0" proptype="description" value="<![CDATA[\1]]>"\/>

    I run this in Notepad++ and it works. The only problem is when the value="" spans multiple lines, like:

        <adprop index="0" proptype="description" value="cutter that has demonstrated her offshore capabiliti
        from there to the Canaries with her current owner. Spacious homely interior with over 2m headroom
        and heaps of" />

    It fails with this one, and there are plenty like it. Can anyone help me out with the regular expression so that I can catch the value when it spans multiple lines? Thanks
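
    A sketch of one way around it, assuming the replacement is done with a PCRE-based tool (PHP here) rather than inside Notepad++: a negated character class like [^"]+ matches across line breaks and stops at the closing quote, so a multi-line value cannot run past its own node. The file names are placeholders.

        <?php
        // Load the whole file so the pattern can span line breaks.
        $xml = file_get_contents('ads.xml');   // placeholder file name

        // [^"]+ means "one or more characters that are not a double quote",
        // which includes newlines; \s* tolerates the space before "/>".
        $pattern     = '/<adprop index="0" proptype="description" value="([^"]+)"\s*\/>/';
        $replacement = '<adprop index="0" proptype="description" value="<![CDATA[$1]]>"/>';

        file_put_contents('ads_fixed.xml', preg_replace($pattern, $replacement, $xml));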

    Read the article

  • Space-saving character encoding for japanese?

    - by Constantin
    In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types, and even a lot of unused code points. So if I want to use them I waste a lot of memory (not only for storing multi-byte text, I mean especially for gaps in my bitmap font), and VRAM is really valuable... So the only reasonable thing seems to be using a custom mapping on my texture for, e.g., UTF-8 characters (so that no space is wasted). BUT: this seems like the same effort as using my own proprietary character encoding (and thus my own ordering of characters in the texture). In my specific case I have texture space for 4096 different characters and need characters to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK code pages). Has anybody ever had a similar problem (I'd be surprised if not)? Is there already an approach for this? Edit: the same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to map a bitmap font onto UTF-8 space-efficiently. So any further help is welcome!
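
    A minimal sketch of the "custom mapping" idea (Python, names invented): keep the text itself in Unicode/UTF-8, and keep a separate dense table that maps only the code points actually used onto slots in the 4096-glyph texture. That way neither the stored text nor the texture layout wastes space on unused ranges, and no proprietary text encoding is needed.

        # Sketch only: a dense code-point -> texture-slot mapping.
        class GlyphAtlas:
            MAX_GLYPHS = 4096          # capacity of the bitmap-font texture

            def __init__(self):
                self.slot_of = {}      # code point -> slot index, filled on demand

            def slot(self, ch):
                cp = ord(ch)
                if cp not in self.slot_of:
                    if len(self.slot_of) >= self.MAX_GLYPHS:
                        raise RuntimeError("glyph atlas is full")
                    self.slot_of[cp] = len(self.slot_of)   # next free slot
                return self.slot_of[cp]

        atlas = GlyphAtlas()
        # Latin and Japanese text share one atlas; only used glyphs occupy slots.
        indices = [atlas.slot(c) for c in u"Latin and 日本語"]
        print(indices)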

    Read the article

  • How do I get rid of these warnings?

    - by Brian Postow
    This is really several questions, but anyway... I'm working with a big project in Xcode, relatively recently ported from MetroWorks (yes, really), and there's a bunch of warnings that I want to get rid of. Every so often an IMPORTANT warning comes up, but I never look at them because there are too many garbage ones. So, if I can either figure out how to get Xcode to stop giving each warning, or actually fix the problem, that would be great. Here are the warnings:

    1. It claims that <map.h> is antiquated. However, when I replace it with <map> my files don't compile. Evidently, there's something in map.h that isn't in map...

    2. "this decimal constant is unsigned only in ISO C90" - This is a large number being compared to an unsigned long. I have even cast it, with no effect.

    3. "enumeral mismatch in conditional expression: <anonymous enum> vs <anonymous enum>" - This appears to be from a ?: operator. Possibly the then and else branches don't evaluate to the same type? Except that in at least one case, it's (matchVp == NULL ? noErr : dupFNErr), and since those are both of type OSErr, which is Mac-defined... I'm not sure what's up. It also seems to come up when I have other pairs of Mac constants...

    4. "multi-character character constant" - This one is obvious. The problem is that I actually NEED multi-character constants...

    5. "-fwritable-strings not compatible with literal CF/NSString" - I unchecked the "Strings are Read-Only" box in both the project and target settings... and it seems to have had no effect...
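
    For the ones that are safe to silence globally rather than fix, a hedged sketch of the compiler-flag route (added to "Other C Flags" or an .xcconfig; whether silencing is acceptable for this codebase is a judgment call):

        // Sketch of an .xcconfig fragment (the same flags can be pasted into Other C Flags).
        // -Wno-deprecated removes the "antiquated header" #warning emitted by the
        // backward-compatibility headers such as <map.h>;
        // -Wno-multichar removes "multi-character character constant".
        OTHER_CFLAGS = -Wno-deprecated -Wno-multichar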

    Read the article

  • Use of Syntactic Sugar / Built in Functionality

    - by Kyle Rozendo
    I was busy looking deeper into things like multi-threading and deadlocking. The book I'm reading presents both pseudo-code and C code, and I was looking at implementations of things such as mutex locks and monitors. This brought to mind the following: in C#, and in fact .NET, we have a lot of syntactic sugar for doing things. For instance (.NET 3.5):

        lock (obj)
        {
            body
        }

    is identical to:

        var temp = obj;
        Monitor.Enter(temp);
        try
        {
            body
        }
        finally
        {
            Monitor.Exit(temp);
        }

    There are other examples of course, such as the using() {} construct, etc. My question is: when is it more applicable to "go it alone" and literally code things oneself than to use the "syntactic sugar" in the language? Should one ever use their own approach rather than that of people who are more experienced in the language you're coding in? I recall having to not use a Process object in a using block to help with some multi-threaded issues and infinite looping before. I still feel dirty for not having the using construct in there. Thanks, Kyle

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB data + 89 MB index on disk
    - a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
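
    A hedged sketch of one way to keep the hourly regeneration fast without living permanently without the FKs (constraint, table, and column names below are invented for illustration): drop the three constraints, bulk-load, and re-add them, so each foreign key is validated in one pass over the table instead of once per inserted row.

        -- Sketch only: names are placeholders, not the real schema.
        BEGIN;

        ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_small_id_fkey;
        ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_big1_id_fkey;
        ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_big2_id_fkey;

        TRUNCATE hourly_stats;
        INSERT INTO hourly_stats SELECT * FROM hourly_stats_staging;

        -- Re-adding validates each FK with a single scan of the table.
        ALTER TABLE hourly_stats ADD CONSTRAINT hourly_stats_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        ALTER TABLE hourly_stats ADD CONSTRAINT hourly_stats_big1_id_fkey
            FOREIGN KEY (big1_id) REFERENCES big_table_1 (id);
        ALTER TABLE hourly_stats ADD CONSTRAINT hourly_stats_big2_id_fkey
            FOREIGN KEY (big2_id) REFERENCES big_table_2 (id);

        COMMIT;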

    Read the article

  • Windsor IHandlerSelector in RIA Services Visual Studio 2010 Beta2

    - by Savvas Sopiadis
    Hi everybody! I want to implement multi-tenancy using Windsor and I don't know how to handle this situation: I successfully used this technique in plain ASP.NET MVC projects and thought incorporating it in a RIA Services project would be similar. So I used IHandlerSelector, registered some components and wrote an ASP.NET MVC view to verify it works in a plain ASP.NET MVC environment. And it did! The next step was to create a DomainService which gets an IRepository injected in the constructor. This service is hosted in the ASP.NET MVC application. And it actually ... works: I can get data out of it to a Silverlight application. Sample snippet:

        public OrganizationDomainService(IRepository<Culture> cultureRepository)
        {
            this.cultureRepository = cultureRepository;
        }

    The last step is to see if it works multi-tenant-like: it does not! The weird thing is this: using some lines of code and writing debug messages to a log file, I verified that the correct handler is selected! BUT this handler seems not to be injected into the DomainService. I ALWAYS get the first handler (that's the logic in my SelectHandler). Can anybody verify this behavior? Is injection not working in RIA Services? Or am I missing something basic? Development environment: Visual Studio 2010 Beta2. Thanks in advance

    Read the article

  • How to include external classes in a GAE deployment?

    - by kodra
    I am using the Google plug-in for Eclipse and have the following problem: the project consists of a GWT-based GUI talking to a server running on GAE and using JPA. Additionally there is a project to migrate the legacy data to the new datastore. Since both of these projects use a common data model, I have extracted a set of interfaces and enums into a separate project and made the other two projects depend on it. The Java app project seems to work, but the GWT/GAE one only works if I manually copy the classes into the WEB-INF/classes directory, and obviously that only works when using hosted mode. Does anybody know how to configure such a multi-project setup in Eclipse? Also, I am not sure if the multi-project layout is the best solution. The set of common model objects is used in all 3 areas: the user client (the GWT project, compiling the standard client and shared folders), the server side (providing services for GWT-RPC, uploading and different feeds), and the migration application (posting the legacy data to the upload servlet). What are the architectural options for keeping the amount of duplicated classes to a minimum?

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode. Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, when trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
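
    A hedged pointer on where to look (the database name below is a placeholder): the user-access mode is a per-database option and is surfaced in sys.databases as user_access / user_access_desc, so checking that column on both partners would show whether the SINGLE_USER setting from the earlier detach attempt was still recorded for the database. Resetting it afterwards is the same ALTER DATABASE already used.

        -- Inspect the recorded access mode for the database.
        SELECT name, user_access_desc
        FROM sys.databases
        WHERE name = N'MyMirroredDb';

        -- Put it back into multi-user, dropping the single connected session if necessary.
        ALTER DATABASE MyMirroredDb SET MULTI_USER WITH ROLLBACK IMMEDIATE;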

    Read the article

  • How can I change the selected value of a drop-down list dynamically?

    - by Deepak Gupta
    I want to pick up the value from the text box and then change the selected value of the drop-down list according to that value:

        <html>
        <head>
        <script>
        function change() {
            var value = document.getElementById('text').value;
            document.getElementById("Model").selectedvalue = value
        }
        </script>
        </head>
        <body>
            <asp:DropDownList ID="Model" AutoPostBack="false" runat="server" CssClass="styled">
                <asp:ListItem Value="None">None</asp:ListItem>
                <asp:ListItem Value="Enum">Enum</asp:ListItem>
                <asp:ListItem Value="Sum">Sum</asp:ListItem>
                <asp:ListItem Value="Multi">Multi</asp:ListItem>
                <asp:ListItem Value="Xaxis">Xaxis</asp:ListItem>
            </asp:DropDownList>
            <input id="text" type="text"/>
            <input type="button" onclick="change();"/>
        </body>
        </html>
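
    A minimal sketch of the usual fix, with two assumptions spelled out: a <select> element has no selectedvalue property (assigning to its .value is what selects the matching option), and because this is a server-side asp:DropDownList the rendered id may not be exactly "Model" (ASP.NET can prefix it with container IDs, in which case the control's ClientID has to be used instead).

        <script type="text/javascript">
            function change() {
                var value = document.getElementById('text').value;
                var list = document.getElementById('Model');   // or the control's ClientID
                list.value = value;   // selects the option whose Value matches, e.g. "Multi"
            }
        </script>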

    Read the article

  • jQueryUI Modal confirmation dialog on form submission

    - by DavidYell
    I am trying to get a modal confirmation dialog working when a user submits a form. My approach, I thought logically, would be to catch the form submission. My code is as follows:

        $('#multi-dialog-confirm').dialog({
            autoOpen: false,
            height: 200,
            modal: true,
            resizable: false,
            buttons: {
                'Confirm': function(){
                    //$(this).dialog('close');
                    return true;
                },
                'Cancel': function(){
                    $(this).dialog('close');
                    return false;
                }
            }
        });

        $('#completeform').submit(function(e){
            e.preventDefault();
            var n = $('#completeform input:checked').length;
            if(n == 0){
                alert("Please check the item and mark as complete");
                return false;
            }else{
                var q = $('#completeform #qty').html();
                if(q > 1){
                    $('#multi-dialog-confirm').dialog('open');
                }
            }
            //return false;
        });

    So I'm setting up my dialog first. This is because I'm pretty certain that the scope of the dialog needs to be at the same level as the function which calls it. However, the issue is that when you click 'Confirm' nothing happens; the submit action does not continue. I've also tried $('#completeform').submit();, which doesn't seem to work. I have tried removing the .preventDefault() to ensure that the form submission isn't completely cancelled, but it doesn't seem to make a difference between that and returning false. Not checking the box shows the alert fine (might change that to a dialog at some point ;)), and clicking 'Cancel' closes the dialog and remains on the page, but the elusive 'Confirm' button seems not to continue with the form submission event. If anyone can help, I'd happily share my lunch with you! ;)
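
    A hedged sketch of the usual workaround (same ids as above): returning true from a dialog button does not feed back into the submit handler, so the Confirm callback has to resubmit the form itself. Calling the DOM element's native submit() bypasses the jQuery .submit() handler, so the dialog is not opened a second time.

        $('#multi-dialog-confirm').dialog({
            autoOpen: false,
            height: 200,
            modal: true,
            resizable: false,
            buttons: {
                'Confirm': function(){
                    $(this).dialog('close');
                    // Native form submission: skips the jQuery handler above,
                    // so preventDefault() is not applied again.
                    $('#completeform')[0].submit();
                },
                'Cancel': function(){
                    $(this).dialog('close');
                }
            }
        });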

    Read the article

  • curl_multi_exec stops if one url is 404, how can I change that?

    - by Rob
    Currently, my cURL multi exec stops if one URL it connects to doesn't work, so a few questions:

    1. Why does it stop? That doesn't make sense to me.
    2. How can I make it continue?

    EDIT: Here is my code:

        $SQL = mysql_query("SELECT url FROM shells");
        $mh = curl_multi_init();
        $handles = array();

        while($resultSet = mysql_fetch_array($SQL)){
            //load the urls and send GET data
            $ch = curl_init($resultSet['url'] . $fullcurl);
            //Only load it for two seconds (Long enough to send the data)
            curl_setopt($ch, CURLOPT_TIMEOUT, 5);
            curl_multi_add_handle($mh, $ch);
            $handles[] = $ch;
        }

        // Create a status variable so we know when exec is done.
        $running = null;
        //execute the handles
        do {
            // Call exec. This call is non-blocking, meaning it works in the background.
            curl_multi_exec($mh, $running);
            // Sleep while it's executing. You could do other work here, if you have any.
            sleep(2);
            // Keep going until it's done.
        } while ($running > 0);

        // For loop to remove (close) the regular handles.
        foreach($handles as $ch) {
            // Remove the current array handle.
            curl_multi_remove_handle($mh, $ch);
        }
        // Close the multi handle
        curl_multi_close($mh);
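
    A hedged sketch of a replacement for the do/while above ($mh is the same multi handle, with the handles already added). A 404 is a normal HTTP response and should not abort the batch by itself; this version just drives the transfers more carefully than a blind sleep(2), which may be what makes handles appear to stop.

        $running = null;
        do {
            // Pump the transfers; repeat while curl asks to be called again immediately.
            while (curl_multi_exec($mh, $running) === CURLM_CALL_MULTI_PERFORM);
            if ($running > 0) {
                // Wait (up to 1s) for activity on any handle instead of sleeping blindly.
                curl_multi_select($mh, 1.0);
            }
        } while ($running > 0);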

    Read the article

  • Getting an error when compiling in debug mode: C++/CLI - error LNK2022

    - by Yochai Timmer
    I've got CLI code wrapping a C++ DLL. When I try to compile it in debug mode, I get the following error:

        Error 22 error LNK2022: metadata operation failed (8013118D) :
        Inconsistent layout information in duplicated types .... MSVCMRTD.lib (locale0_implib.obj)

    The weird thing is that in Release mode it compiles OK and works OK. The only difference I can see that causes the problem is when I change Configuration Properties - C/C++ - Code Generation - Runtime Library. When it's set to Multi-threaded Debug DLL (/MDd), it throws the error. When it's set to Multi-threaded DLL (/MD), it compiles fine. The same settings work for all the other DLLs in the project (CLI and C++) and they inherit the same properties. I'm using VS2010. So, how can I solve this? And can I get some explanation of WHY this is happening? Update: I've basically tried changing every option in the project's properties with no luck. I've read somewhere that this might be caused by duplicate declarations of a type with the same name. But in the CLI file I'm calling std::string etc. explicitly from std. Any other ideas?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts to a 34GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
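
    A hedged side note on the batching question (table and column names below are invented): MySQL documents that LAST_INSERT_ID() after a multi-row INSERT returns the id generated for the first row of that statement, and for simple inserts under the default InnoDB auto-increment lock mode the batch is allocated consecutive values, so the remaining ids can be derived. That behaviour depends on innodb_autoinc_lock_mode, so it is worth verifying on the actual server before relying on it instead of switching to GUIDs.

        -- Sketch with placeholder names, not the real schema.
        INSERT INTO events (a, b, c) VALUES
            (1, 2, 3),
            (4, 5, 6),
            (7, 8, 9);

        -- First auto-generated id of the batch:
        SELECT LAST_INSERT_ID();
        -- With consecutive allocation the ids are
        -- LAST_INSERT_ID() .. LAST_INSERT_ID() + ROW_COUNT() - 1
        SELECT ROW_COUNT();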

    Read the article

  • Get settings through a button action

    - by Russ Knudsen
    I am looking for a way to access user settings (I assume NSUserDefaults?) through a button action. Let me back up and explain. What I have right now are 2 text fields, a label and a button. The user will type measurements into the 2 text fields. When they hit the button, the label displays the volume of the measured object in gallons. That part of it works great. Then I wanted to give the user the option to output the volume in liters instead of gallons. I would also like to give the user the option to type in the measurements in centimeters. So I set up a Settings.bundle and configured it with 2 Multi Value cells (measurement units and volumetric units). Each Multi Value cell has its own list of different units the user can pick from. My main issue is I don't know how to access these settings through the button action. I may be thinking of this wrong, but what I'm looking for is something like:

        Button Action
            If settings key = 0 Then do the math in Inches, Display in Gallons
            If settings key = 1 Then do the math in Centimeters, Display in Gallons
            If settings key = 2 Then do the math in Inches, Display in Liters
            If settings key = 3 Then do the math in Centimeters, Display in Liters
            Etc...

    Is this possible? Am I thinking of this in the wrong way? What's the best way to do this?

    Read the article

  • Multithreading A Function in VB.Net

    - by Ben
    I am trying to multi-thread my application so that the form stays responsive while the process runs. This is what I have so far:

        Private Sub SendPOST(ByVal URL As String)
            Try
                Dim DataBytes As Byte() = Encoding.ASCII.GetBytes("")
                Dim Request As HttpWebRequest = TryCast(WebRequest.Create(URL.Trim & "/webdav/"), HttpWebRequest)
                Request.Method = "POST"
                Request.ContentType = "application/x-www-form-urlencoded"
                Request.ContentLength = DataBytes.Length
                Request.Timeout = 1000
                Request.ReadWriteTimeout = 1000
                Dim PostData As Stream = Request.GetRequestStream()
                PostData.Write(DataBytes, 0, DataBytes.Length)
                Dim Response As WebResponse = Request.GetResponse()
                Dim ResponseStream As Stream = Response.GetResponseStream()
                Dim StreamReader As New IO.StreamReader(ResponseStream)
                Dim Text As String = StreamReader.ReadToEnd()
                PostData.Close()
            Catch ex As Exception
                If ex.ToString.Contains("401") Then
                    TextBox2.Text = TextBox2.Text & URL & "/webdav/" & vbNewLine
                End If
            End Try
        End Sub

        Public Sub G0()
            Dim siteSplit() As String = TextBox1.Text.Split(vbNewLine)
            For i = 0 To siteSplit.Count - 1
                Try
                    If siteSplit(i).Contains("http://") Then
                        SendPOST(siteSplit(i).Trim)
                    Else
                        SendPOST("http://" & siteSplit(i).Trim)
                    End If
                Catch ex As Exception
                End Try
            Next
        End Sub

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            Dim t As Thread
            t = New Thread(AddressOf Me.G0)
            t.Start()
        End Sub

    However, the 'G0' sub code is not being executed at all, and I need to multi-thread 'SendPOST', as that is what slows the application down.
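
    A hedged guess at why G0 looks like it never runs: it touches TextBox1/TextBox2 from the background thread, the resulting cross-thread exception is swallowed by the empty Catch, and the loop dies silently. Below is a minimal sketch of marshalling the UI access back to the form's thread; the URL list in TextBox1 would also need to be read before the thread starts (or read through the same pattern).

        ' Sketch only: append results to TextBox2 safely from any thread.
        Private Sub AppendResult(ByVal line As String)
            If TextBox2.InvokeRequired Then
                TextBox2.Invoke(New Action(Of String)(AddressOf AppendResult), line)
            Else
                TextBox2.AppendText(line & Environment.NewLine)
            End If
        End Sub

        ' In SendPOST, replace the direct TextBox2 assignment with:
        ' AppendResult(URL & "/webdav/")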

    Read the article
