Search Results

Search found 14131 results on 566 pages for 'of note'.


  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 MB) to an already-authenticated client, the packet fragmentation/disassembly I'm seeing is far greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect.

    I understand that SSL encrypts data in chunks no bigger than 18 KB, but since SSL sits on top of TCP, I wouldn't think that the number of SSL chunks would have any relevance to the TCP packet fragmentation. Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing?

    EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This is the same behaviour I would see if the receive buffer were limited to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
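
    A minimal sketch of the read pattern usually paired with this behaviour, assuming the 16363-byte chunks are TLS record boundaries (each Read/ReadCallback returns at most one decrypted record): keep reading until the expected number of bytes has arrived. The helper name and the idea of knowing the total length up front are illustrative, not from the post.

        // Hypothetical helper: loop Read() until `count` bytes have been consumed,
        // since SslStream hands back at most one ~16 KB TLS record per call.
        static void ReadExact(System.Net.Security.SslStream stream, byte[] buffer, int count)
        {
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new System.IO.EndOfStreamException("Connection closed mid-message.");
                offset += read;
            }
        }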

  • using libcurl to check if a file exists on a SFTP site

    - by Snazzer
    I'm using C++ with libcurl to do SFTP/FTPS transfers. Before uploading a file, I need to check whether the file exists without actually downloading it. If the file doesn't exist, I run into the following problem:

        // set up CurlHandle for the public/private keys and whatever else first.
        curl_easy_setopt(CurlHandle, CURLOPT_URL, "sftp://user@pass:host/nonexistent-file");
        curl_easy_setopt(CurlHandle, CURLOPT_NOBODY, 1);
        curl_easy_setopt(CurlHandle, CURLOPT_FILETIME, 1);
        int result = curl_easy_perform(CurlHandle);
        // result is CURLE_OK, not CURLE_REMOTE_FILE_NOT_FOUND.
        // Using curl_easy_getinfo to get the file time returns -1 for filetime,
        // regardless of whether the file is there or not.

    If I don't use CURLOPT_NOBODY, it works: I get CURLE_REMOTE_FILE_NOT_FOUND. However, if the file does exist, it gets downloaded, which wastes time, since I just want to know whether it's there or not. Any other techniques/options I'm missing? Note that it should work for FTPS as well.

    Edit: this error occurs with SFTP. With FTPS/FTP I get CURLE_FTP_COULDNT_RETR_FILE, which I can work with.
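
    One option sometimes suggested, shown here only as a sketch (it assumes your libcurl build honors byte ranges for SFTP, which is worth verifying): ask for a single byte with CURLOPT_RANGE, so a missing file still produces an error while an existing one costs almost nothing to probe.

        curl_easy_setopt(CurlHandle, CURLOPT_URL, "sftp://user@pass:host/some-file");
        curl_easy_setopt(CurlHandle, CURLOPT_NOBODY, 0L);
        curl_easy_setopt(CurlHandle, CURLOPT_RANGE, "0-0");   // fetch only the first byte
        CURLcode rc = curl_easy_perform(CurlHandle);
        // Expect CURLE_OK when the file exists and an error code otherwise.
        // Reset the range before reusing the handle for the real upload:
        curl_easy_setopt(CurlHandle, CURLOPT_RANGE, NULL);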

  • edmx - The operation could not be completed - When adding Inheritance

    - by vdh_ant
    Hey guys, I have an edmx model onto which I have dragged 2 tables, one called 'File' and the other 'ApplicaitonFile'. These two tables have a 1 to 1 relationship in the database. If I stop here, everything works fine. But in my model I want 'ApplicaitonFile' to inherit from 'File'. So I delete the 1 to 1 relationship, then configure 'ApplicaitonFile' to inherit from 'File', and then remove the FileId from 'ApplicaitonFile', which was the primary key. (Note: I am following the instructions from here.) If I leave the model open at this point everything is fine, but as soon as I close it and try to reopen it, I get the following error: "The operation could not be completed". I have been searching for a solution and found this - http://stackoverflow.com/questions/944050/entity-model-does-not-load - but as far as I can tell I don't have duplicate InheritanceConnectors (although I don't know exactly what I'm looking for, I can't see anything out of the ordinary, like 2 connectors with the same name), and the relationship I originally have is a 1 to 1, not a 1 to 0..1. Any ideas? This is driving me crazy...

  • Problem uploading app to google app engine

    - by Oberon
    I'm having problems uploading an app to the Google App Engine from my workplace. I believe the problem is related to the proxy, because I do not see the same problem when following the same procedure from home (I do not specify HTTP_PROXY from home). These are the commands I run:

        HTTP_PROXY=http://proxy.<thehostname>.com:8080
        HTTP_PROXY=https://proxy.<thehostname>.com:8080
        appcfg.py --insecure update myappfolder

    When running the commands I get prompted for email and password, as expected, but after that it immediately exits with this error message:

        Error 302: --- begin server output ---
        <HTML>
        <HEAD>
        <TITLE>Moved Temporarily</TITLE>
        </HEAD>
        <BODY BGCOLOR="#FFFFFF" TEXT="#000000">
        <H1>Moved Temporarily</H1>
        The document has moved <A HREF="https://www.google.com/accounts/ClientLogin">here</A>.
        </BODY>
        </HTML>
        --- end server output ---

    Note: I added the --insecure option because otherwise it gave a warning about a missing ssl module. Any idea how to solve or work around this problem?
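
    For comparison, a sketch of how the two proxy variables are usually set, assuming a POSIX-style shell and that appcfg.py reads both. Note that the second command above assigns HTTP_PROXY a second time rather than defining an HTTPS proxy, and without export the variables may not reach the child process at all; whether that is the actual cause here is an assumption.

        # Export both variables so appcfg.py (a child process) inherits them.
        export HTTP_PROXY=http://proxy.<thehostname>.com:8080
        export HTTPS_PROXY=http://proxy.<thehostname>.com:8080
        appcfg.py --insecure update myappfolder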

  • How do I create a Status Icon / System Tray Icon with custom text and transparent background using PyGTK?

    - by Raugturi
    Here is the code that I have so far to define the icon:

        icon_bg = gtk.gdk.pixbuf_new_from_file('gmail.png')
        w, h = icon_bg.get_width(), icon_bg.get_height()
        cmap = gtk.gdk.Colormap(gtk.gdk.visual_get_system(), False)
        drawable = gtk.gdk.Pixmap(None, w, h, 24)
        drawable.set_colormap = cmap
        gc = drawable.new_gc()
        drawable.draw_pixbuf(gc, icon_bg, 0, 0, 0, 0, w, h)
        drawn_icon = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, w, h)
        drawn_icon.get_from_drawable(drawable, cmap, 0, 0, 0, 0, w, h)
        icon = gtk.status_icon_new_from_pixbuf(drawn_icon)

    This works to get the PNG into the icon, but falls short in two areas. First, transparency is not working. If I use a 22x22 PNG with a transparent background and the image centered, I end up with sections of other active icons showing up inside of mine, like this: http://i237.photobucket.com/albums/ff311/Raugturi/22x22_image_with_transparency.png The icon it chooses to steal from is somewhat random; sometimes it's part of the Dropbox icon, other times the NetworkManager applet. If I instead use this code:

        icon_bg = gtk.gdk.pixbuf_new_from_file('gmail.png')
        w, h = icon_bg.get_width(), icon_bg.get_height()
        cmap = gtk.gdk.Colormap(gtk.gdk.visual_get_system(), False)
        drawable = gtk.gdk.Pixmap(None, w, h, 24)
        drawable.set_colormap = cmap
        gc = drawable.new_gc()
        drawable.draw_pixbuf(gc, icon_bg, 0, 0, 0, 0, w, h)
        drawn_icon = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, 22, 22)
        drawn_icon.get_from_drawable(drawable, cmap, 0, 0, 3, 6, w, h)
        icon = gtk.status_icon_new_from_pixbuf(drawn_icon)

    and an image that is only 16x11 with the transparent edges removed, what I end up with is this (same URL, but the file is 16x11_image_positioned_in_middle.png).

    So how do I end up with a transparent block like the first one that doesn't pull in stuff from other icons? As for the second problem, I need the ability to write on the image before converting it to the icon. I tried using draw_glyphs and it told me I should be using a Pango layout/context instead. Unfortunately, all the Pango tutorials I could find deal with actual windows, not the status icon. Is there a good tutorial out there for Pango that would apply to this issue (and maybe also have at least some explanation of how to tell it what font to use, as all the tutorials I found seem to skip this and it won't write anything without it)?

    Note: Sorry for the lack of actual images and only one working link; apparently this is a spam-prevention feature due to my lack of reputation.
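
    A minimal sketch of why the transparency is probably being lost, offered as an assumption rather than a confirmed diagnosis: gtk.gdk.Pixmap has no alpha channel, so the round trip through the 24-bit drawable discards the PNG's alpha and whatever happens to be behind the tray shows through. Keeping the alpha-capable pixbuf the whole way (and compositing any text onto the pixbuf itself) avoids the round trip; the file name is the one from the post.

        import gtk

        # Load with alpha intact and hand the pixbuf straight to the status icon:
        # no Pixmap/Colormap detour, so transparent pixels stay transparent.
        icon_pixbuf = gtk.gdk.pixbuf_new_from_file('gmail.png')
        icon = gtk.status_icon_new_from_pixbuf(icon_pixbuf)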

  • Where does a novice begin with error logging in ASP.NET C#?

    - by korben
    I'm a novice teaching myself ASP.NET in C# via trial and error, learning by doing; unfortunately this means lots of errors! I have a custom errors page now that is basically a 404, so that site visitors don't get that ugly application error message .NET throws, but I WOULD like to be able to see what's going wrong myself as people use the site. So I'm looking to build, or learn from, a fairly basic error-logging C# class that will take the same information given in a browser when hitting a .NET error, write it to a TXT file, and email me the error at the same time. I don't know where to even begin; can someone give me some pointers? An open-source class that does this already, which I could plug in and play with, would work as well. Otherwise, some links or guidance on where to start reading would be great too. I sort of have a mental block on understanding MSDN info-dump pages, though; I'm hoping to find some articles by real people talking about implementing the same thing themselves, or something like that. Please note I'm not looking to use some extensive or complicated third-party service for this; I'm hoping to learn from the process of implementing a concise, customized one.
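
    A minimal sketch of the kind of class being described, assuming a handler in Global.asax and a writable App_Data folder (the log path is illustrative, and the email step is left out):

        // Global.asax.cs: log any unhandled exception to a text file, then let
        // the custom error page take over as usual.
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            if (ex == null) return;

            string entry = string.Format("{0:u}  {1}  {2}{3}",
                DateTime.UtcNow, Request.RawUrl, ex, Environment.NewLine);

            System.IO.File.AppendAllText(
                Server.MapPath("~/App_Data/errors.log"), entry);
        }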

  • select all values from a dimension for which there are facts in all other dimensions

    - by ideasculptor
    I've tried to simplify for the purposes of asking this question; hopefully, this will be comprehensible. Basically, I have a fact table with a time dimension, another dimension, and a hierarchical dimension. For the purposes of the question, let's assume the hierarchical dimension is zip code and state. The other dimension is just descriptive; let's call it 'customer'. Let's assume there are 50 customers.

    I need to find the set of states for which there is at least one zip code in which EVERY customer has at least one fact row for each day in the time dimension. If a zip code has only 49 customers, I don't care about it. If even one of the 50 customers doesn't have a value for even 1 day in a zip code, I don't care about it. Finally, I also need to know which zip codes qualified the state for selection. Note: there is no requirement that every zip code have a full data set, only that at least one zip code does.

    I don't mind making multiple queries and doing some processing on the client side. This is a dataset that only needs to be generated once per day and can be cached. I don't even see a particularly clean way to do it with multiple queries, short of simply brute-force iteration, and there are a heck of a lot of 'zip codes' in the data set (not actually zip codes, but there are approximately 100,000 entries in the lower level of the hierarchy and several hundred in the top level, so zipcode-state is a reasonable analogy).
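
    A sketch of one way to express the zip-code test, written with MySQL-style syntax and entirely assumed table and column names (fact(zip_id, customer_id, day_id), zip_dim(zip_id, state), customer_dim, time_dim): a zip code qualifies when its distinct (customer, day) pairs cover every customer for every day, and the states fall out of the qualifying zips.

        -- Zip codes (and their states) where every customer has at least one
        -- fact row for every day; all names are placeholders.
        SELECT z.state, z.zip_id
        FROM zip_dim z
        JOIN fact f ON f.zip_id = z.zip_id
        GROUP BY z.state, z.zip_id
        HAVING COUNT(DISTINCT f.customer_id, f.day_id) =
               (SELECT COUNT(*) FROM customer_dim) * (SELECT COUNT(*) FROM time_dim);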

  • Memory leaks while using array of double

    - by Gacek
    I have a part of code that operates on large arrays of double (containing at least about 6000 elements) and executes several hundred times (usually 800). When I use a standard loop, like this:

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
        }

    the memory usage rises by about 40 MB (from 40 MB at the beginning of the loop to 80 MB at the end). When I force the garbage collector to execute at every iteration, the memory usage stays at the level of 40 MB (the rise is insignificant):

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            singleRow = someObject.producesOutput();
            // ...
            // do something with singleRow
            // ...
            GC.Collect();
        }

    But the execution time is 3 times longer! (It is crucial.) How can I force C# to use the same area of memory instead of allocating new ones? Note: I have access to the code of the someObject class, so if it were needed, I could change it.
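
    A sketch of the buffer-reuse idea, which relies on the stated option of changing someObject: have the producer write into a caller-owned array instead of returning a freshly allocated one each pass (FillOutput is a hypothetical method name, not part of the original code).

        double[] singleRow = new double[6000];
        int maxI = 800;
        for (int i = 0; i < maxI; i++)
        {
            // Hypothetical replacement for producesOutput(): fills the supplied
            // buffer, so no new 6000-element array is allocated per iteration.
            someObject.FillOutput(singleRow);
            // ... do something with singleRow ...
        }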

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour:

    - During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB. (Note: a registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows.)
    - After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second.

    I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger?

    Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.

  • How do I enable mod_deflate for PHP files?

    - by DM.
    I have a Liquid Web VPS account, and I've made sure that mod_deflate is installed and running/active. I used to gzip my CSS and JS files via PHP, as well as my PHP files themselves. However, I'm now trying to do this via mod_deflate, and it seems to work fine for all files except PHP files (TXT files work fine, as do CSS, JS and static HTML files; just nothing that is generated via a PHP file). How do I fix this?

    (I used the "Compress all content" option under "Optimize Website" in cPanel, which creates an .htaccess file in the home directory (not public_html, one level higher than that) with exactly the same text as the "compress everything except images" example at http://httpd.apache.org/docs/2.0/mod/mod_deflate.html)

    .htaccess file:

        <IfModule mod_deflate.c>
          SetOutputFilter DEFLATE
          <IfModule mod_setenvif.c>
            # Netscape 4.x has some problems...
            BrowserMatch ^Mozilla/4 gzip-only-text/html
            # Netscape 4.06-4.08 have some more problems
            BrowserMatch ^Mozilla/4\.0[678] no-gzip
            # MSIE masquerades as Netscape, but it is fine
            # BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
            # NOTE: Due to a bug in mod_setenvif up to Apache 2.0.48
            # the above regex won't work. You can use the following
            # workaround to get the desired effect:
            BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html
            # Don't compress images
            SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
          </IfModule>
          <IfModule mod_headers.c>
            # Make sure proxies don't deliver the wrong content
            Header append Vary User-Agent env=!dont-vary
          </IfModule>
        </IfModule>
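
    A hedged sketch of what is often tried when only dynamically generated pages stay uncompressed (whether it fixes this particular cPanel/VPS setup is an assumption): compress by MIME type in addition to the blanket filter, so PHP output (text/html) is matched regardless of file extension, and make sure PHP's own zlib.output_compression is switched off so the two layers don't double-compress.

        <IfModule mod_deflate.c>
          # Compress the content types PHP responses actually emit.
          AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript text/javascript
        </IfModule>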

  • SEO Google keyword position tools?

    - by Peterl86
    Hi guys, I want to check our Google positions for several keywords every day and make a note in a spreadsheet. At the moment we have a student doing it, but it's a rubbish job and it doesn't seem fair on them! Are there any tools available to automate this process? I have tried Rank Checker by seobook.com, but although it should be exactly what I'm looking for, when I set scheduled tasks in it, it doesn't work. Any tips would be appreciated, thanks! Peter

    EDIT: Liam has suggested a Python script to do this, which unfortunately isn't something I'm very familiar with! If anyone knows of a good tutorial or something to help us with this, that would be brilliant.

    Update: Found a PHP script at seoscript.net which looks like a step in the right direction. But I can't get it to work! I get this error. Anyone more knowledgeable than me know how to fix that? I have PEAR installed. Thanks again, Peter

  • Match subpatterns in any order

    - by Yaroslav
    I have a long regexp with two complicated subpatterns inside. How can I match those subpatterns in any order? Simplified example:

        /(apple)?\s?(banana)?\s?(orange)?\s?(kiwi)?/

    and I want to match both of

        apple banana orange kiwi
        apple orange banana kiwi

    It is a very simplified example. In my case banana and orange are long, complicated subpatterns, and I don't want to do something like

        /(apple)?\s?((banana)?\s?(orange)?|(orange)?\s?(banana)?)\s?(kiwi)?/

    Is it possible to group subpatterns like chars in a character class?

    UPD: Real data as requested:

        14:24 26,37 Mb
        108.53 01:19:02 06.07
        24.39 19:39
        46:00

    My strings are much longer, but this is the significant part. Here you can see the lines I need to match. The first has two values: length (14 min 24 sec) and size 26.37 Mb. The second one has three values, but in a different order: size 108.53 Mb, length 01 h 19 m 02 s and date June 07. The third one has two, size and length. The fourth has only length. There are a couple more variations and I need to parse all values. I have a regexp that is pretty close, except I can't figure out how to match patterns in a different order without writing it twice:

        (?<size>\d{1,3}\[.,]\d{1,2}\s+(?:Mb)?)?\s?
        (?<length>(?:(?:01:)?\d{1,2}:\d{2}))?\s*
        (?<date>\d{2}\.\d{2}))?

    NOTE: that is only part of a big regexp that works fine already.
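
    For the simplified example, a sketch of the usual order-independent trick (the anchors and whitespace handling are assumptions): repeat a single alternation of the subpatterns instead of spelling out every ordering.

        # Matches the fruits in any order, separated by optional whitespace:
        /\A(?:(?:apple|banana|orange|kiwi)\s?)+\z/

    The same shape applies to the real data: wrap the size, length and date subpatterns as named groups inside one alternation and repeat it, though the overlap between the size and date patterns means the order of the alternatives still needs some care.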

  • Forms problem in Django 1.1

    - by alexarsh
    I have the following form:

        class ModuleItemForm2(forms.ModelForm):
            class Meta:
                model = Module_item
                fields = ('title', 'media', 'thumb', 'desc', 'default', 'player_option')

    The model is:

        class Module_item(models.Model):
            title = models.CharField(max_length=100)
            layout = models.CharField(max_length=5, choices=LAYOUTS_CHOICE)
            media = models.CharField(help_text='Media url', max_length=500, blank=True, null=True)
            conserv = models.ForeignKey(Conserv, help_text='Redirect to Conserv', blank=True, null=True)
            conserve_section = models.CharField(max_length=100, help_text='Section within the redirected Conserv', blank=True, null=True)
            parent = models.ForeignKey('self', help_text='Upper menu.', blank=True, null=True)
            module = models.ForeignKey(Module, blank=True, null=True)
            thumb = models.FileField(upload_to='sms/module_items/thumbs', blank=True, null=True)
            desc = models.CharField(max_length=500, blank=True, null=True)
            auto_play = models.IntegerField(help_text='Auto start play (miliseconds)', blank=True, null=True)
            order = models.IntegerField(help_text='Display order', blank=True, null=True)
            depth = models.IntegerField(help_text='The layout depth', blank=True, null=True)
            flow_replace = models.IntegerField(blank=True, null=True)
            default = models.IntegerField(help_text='The selected sub item (Note: Starting from 0)', blank=True, null=True)
            player_options = models.CharField(max_length=1000, null=True, blank=True)

    In my view I build the form:

        module_item_form2 = ModuleItemForm2()
        print module_item_form2

    And I get the following error on the print line: 'NoneType' object has no attribute 'label'. It works fine with Django 1.0.2; I see the error only in Django 1.1. Do you have an idea what I'm doing wrong? Regards, Arshavski Alexander.

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
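
    One thing often tried before falling back to manual joins (a sketch; property and table names follow the post, and whether SQL Server CE benefits here is an assumption): tell the DataContext to fetch the related rows eagerly with DataLoadOptions, so walking the EntitySet doesn't issue a separate query per parent row.

        // Must be set before the first query runs on this DataContext.
        var loadOptions = new System.Data.Linq.DataLoadOptions();
        loadOptions.LoadWith<Customer>(c => c.Order);
        db.LoadOptions = loadOptions;

        // The related Order rows now come back with the customers up front.
        var stopwatchOrders = db.Customers
            .SelectMany(c => c.Order)
            .Where(o => o.ProductName == "Stopwatch")
            .ToList();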

  • How do I deconstruct COUNT()?

    - by user151841
    I have a view with some joins in it. I'm doing a select from that view with COUNT(*) as one of the columns of the select, and I'm surprised by the number it's returning. Note that there is no GROUP BY or aggregate column statement in the source view that the query is drawing from. How can I take it apart to see how it arrives at this number? I have three columns in the GROUP BY clause:

        SELECT column1, column2, column3, COUNT(*)
        FROM View
        GROUP BY column1, column2, column3

    I get a result like

        +---------+---------+---------+----------+
        | column1 | column2 | column3 | COUNT(*) |
        +---------+---------+---------+----------+
        | value1  | valueA  | value_a |      103 |
        +---------+---------+---------+----------+
        | value2  | valueB  | value_b |       56 |
        +---------+---------+---------+----------+

    I'd like to see how it arrives at that 103, 56, etc. In other words, I want to run a query that returns 103 rows of something, so that I know that I've expressed the query properly. I'm double-checking my work. I'm not saying that I think COUNT(*) doesn't work (I know that "SELECT is not broken"); what I want to double-check is exactly what I'm expressing in my query, because I think I've expressed the wrong thing, which would be why I'm getting unexpected values. I need to see more of what I'm actually directing MySQL to count.

    So should I take them one by one, and try out each value in a WHERE clause? In other words, should I do

        SELECT column1 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column1 FROM View WHERE column1 = 'second_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'first_grouped_value'
        SELECT column2 FROM View WHERE column1 = 'second_grouped_value'

    and see whether the row count returned matches the COUNT(*) value in the grouped results? Because of confidentiality, I won't be able to post any of the query or database structure. All I'm asking for is a general technique to see what COUNT(*) is actually counting.
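
    A sketch of the usual drill-down, using the values from the example output above: to reproduce one grouped count, filter on all three grouped columns at once rather than one column at a time.

        -- Should return exactly 103 rows if the grouping means what you think it means.
        SELECT *
        FROM View
        WHERE column1 = 'value1'
          AND column2 = 'valueA'
          AND column3 = 'value_a';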

  • Adding defaults and indexes to a script/generate command in a Rails Template?

    - by charliepark
    I'm trying to set up a Rails Template that would allow for comprehensive set-up of a specific Rails app. Using Pratik Naik's overview (http://m.onkey.org/2008/12/4/rails-templates), I was able to set up a couple of scaffolds and models, with a line that looks something like this:

        generate("scaffold", "post", "title:string", "body:string")

    I'm now trying to add in Delayed Jobs, which normally has a migration that looks like this:

        create_table :delayed_jobs, :force => true do |table|
          table.integer  :priority, :default => 0   # Allows some jobs to jump to the front of the queue
          table.integer  :attempts, :default => 0   # Provides for retries, but still fail eventually.
          table.text     :handler                   # YAML-encoded string of the object that will do work
          table.text     :last_error                # reason for last failure (See Note below)
          table.datetime :run_at                    # When to run. Could be Time.now for immediately, or sometime in the future.
          table.datetime :locked_at                 # Set when a client is working on this object
          table.datetime :failed_at                 # Set when all retries have failed (actually, by default, the record is deleted instead)
          table.string   :locked_by                 # Who is working on this object (if locked)
          table.timestamps
        end

    So, what I'm trying to do with the Rails template is to add that :default => 0 into the master template file. I know that the rest of the template's command should look like this:

        generate("migration", "createDelayedJobs", "priority:integer", "attempts:integer", "handler:text", "last_error:text", "run_at:datetime", "locked_at:datetime", "failed_at:datetime", "locked_by:string")

    Where would I put (or, rather, what is the syntax to add) the :default values in that? And if I wanted to add an index, what's the best way to do that?
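
    One approach that sidesteps the generator arguments entirely (a sketch, assuming a Rails 2.x template and that writing the migration file directly is acceptable; the timestamp in the file name is illustrative): since script/generate migration arguments only carry name:type pairs, have the template drop a hand-written migration with the defaults and index spelled out, using the template API's file method.

        file "db/migrate/20100101000000_create_delayed_jobs.rb", <<-RUBY
          class CreateDelayedJobs < ActiveRecord::Migration
            def self.up
              create_table :delayed_jobs do |t|
                t.integer  :priority, :default => 0
                t.integer  :attempts, :default => 0
                t.text     :handler
                t.text     :last_error
                t.datetime :run_at
                t.datetime :locked_at
                t.datetime :failed_at
                t.string   :locked_by
                t.timestamps
              end
              add_index :delayed_jobs, [:priority, :run_at]
            end

            def self.down
              drop_table :delayed_jobs
            end
          end
        RUBY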

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for an SQL Server Express instance running a single database?

    I have an SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.

    Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network and apply them to the second server at 5 minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do?

    Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
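
    A minimal sketch of the poor man's log shipping being described, assuming the database is in the FULL recovery model and has been seeded on the standby with a full backup restored WITH NORECOVERY; Express supports BACKUP/RESTORE, it just lacks SQL Agent, so the batch files would be driven by Task Scheduler. The database name and paths are illustrative.

        -- On the production instance, every 5 minutes:
        BACKUP LOG [MyAppDb] TO DISK = N'C:\LogShip\MyAppDb_log.trn' WITH INIT;

        -- After the batch file copies the .trn across the network, on the standby:
        RESTORE LOG [MyAppDb] FROM DISK = N'C:\LogShip\MyAppDb_log.trn'
            WITH STANDBY = N'C:\LogShip\MyAppDb_undo.dat';
        -- STANDBY keeps the copy readable between restores; use NORECOVERY
        -- instead if read access on the standby isn't needed.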

  • XML Schema: Can I make some of an attribute's values be required but still allow other values?

    - by scrotty
    (Note: I cannot change the structure of the XML I receive; I am only able to change how I validate it.)

    Let's say I can get XML like this:

        <Address Field="Street" Value="123 Main"/>
        <Address Field="StreetPartTwo" Value="Unit B"/>
        <Address Field="State" Value="CO"/>
        <Address Field="Zip" Value="80020"/>
        <Address Field="SomeOtherCrazyValue" Value="Foo"/>

    I need to create an XSD schema that validates that "Street", "State" and "Zip" must be present, but I don't care if "StreetPartTwo" or "SomeOtherCrazyValue" is present. If I knew that only the three I care about could be included, I could do this:

        <xs:element name="Address" type="addressType" maxOccurs="unbounded" minOccurs="3"/>
        <xs:complexType name="addressType">
          <xs:attribute name="Field" use="required">
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:enumeration value="Street"/>
                <xs:enumeration value="State"/>
                <xs:enumeration value="Zip"/>
              </xs:restriction>
            </xs:simpleType>
          </xs:attribute>
        </xs:complexType>

    But this won't work in my case, because I may also receive those other Address elements (which also have "Field" attributes) that I don't care about. Any ideas how I can ensure the stuff I care about is present but let the other stuff in too? TIA! Sean

  • Help with why my app crashed?

    - by Moshe
    I'm writing an iPad app that is a "kiosk" app: the iPad should be hanging on the wall and the app should just run. I did a test, starting the app last night (Friday, December 31) and letting it run. This morning, when I woke up, it was not running. I just checked the iPad's console and I can't figure out why it crashed. The iPad was plugged in, so the battery is not the issue. I did disable the idleTimer in my application delegate. The app was seen running as late as midnight last night.

    I would like to note that my app acts as a Bluetooth server through Game Kit, and a large portion of the console output is occupied by Bluetooth status messages. When I opened the iPad, the app was paused and there was a system alert prompting me to check an "Expiring Provisioning Profile". I tapped "Dismiss" and the alert went away. The app crashed about a second after I dismissed the system alert. Any ideas how I can diagnose this problem? Why would my app crash? Here is my iPad's console log, as copied from Xcode's Organizer.

    Edit: A bit of Googling led me to this site, which says that alert views cause the app to lose focus. Could that be involved? What can I do to fix the problem?

    EDIT 2: My crash log describes the situation as:

        Application Specific Information:
        appname failed to resume in time
        Elapsed total CPU time (seconds): 10.010 (user 8.070, system 1.940), 100% CPU
        Elapsed application CPU time (seconds): 9.470, 95% CPU

  • array_map applied on a function with 2 parameters

    - by mat
    I have 2 arrays ($numbers and $letters) and I want to create a new array based on a function that combines every $numbers value with every $letters value. The parameters of this function involve the values of both $numbers and $letters. (Note: $numbers and $letters don't have the same number of values.) I need something like this:

        $numbers = array(1,2,3,4,5,6,...);
        $letters = array('a','b','c','d','e',...);

        function myFunction($x,$y){
            // $output = some code that uses $x and $y
            return $output;
        };

        $array_1 = array( myFunction($numbers[0],$letters[0]), myFunction($numbers[0],$letters[1]),
                          myFunction($numbers[0],$letters[2]), myFunction($numbers[0],$letters[3]), etc);
        $array_2 = array( myFunction($numbers[1],$letters[0]), myFunction($numbers[1],$letters[1]),
                          myFunction($numbers[1],$letters[2]), myFunction($numbers[1],$letters[3]), etc);
        $array_3 = array( myFunction($numbers[2],$letters[0]), myFunction($numbers[2],$letters[1]),
                          myFunction($numbers[2],$letters[2]), myFunction($numbers[2],$letters[3]), etc);
        ...
        $array_N = array( myFunction($numbers[N],$letters[0]), myFunction($numbers[N],$letters[1]),
                          myFunction($numbers[N],$letters[2]), myFunction($numbers[N],$letters[3]), etc);

        $array = array($array_1, $array_2, $array_3, etc.);

    I know that this would work, but it's a lot of code, especially if I have many values in each array. Is there a way to get the same result with less code? I tried this, but it's not working:

        $array = array_map("myFunction($value, $letters)", $numbers);

    Any help would be appreciated!
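
    A sketch of two compact ways to build the same nested structure; both call the existing myFunction and assume plain arrays like the ones above (the closure version needs PHP 5.3+).

        // Nested loops: one row per number, one entry per letter.
        $array = array();
        foreach ($numbers as $n) {
            $row = array();
            foreach ($letters as $l) {
                $row[] = myFunction($n, $l);
            }
            $array[] = $row;
        }

        // Equivalent with array_map and anonymous functions (PHP 5.3+):
        $array = array_map(function ($n) use ($letters) {
            return array_map(function ($l) use ($n) {
                return myFunction($n, $l);
            }, $letters);
        }, $numbers);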

  • What's the difference between these LINQ queries ?

    - by SnAzBaZ
    I use LINQ to SQL as my DAL; I then have a project called DB which acts as my BLL. Various applications then access the BLL to read/write data from the SQL database. I have these methods in my BLL for one particular table:

        public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return from s in db.SystemSalesTaxLists
                   select s;
        }

        public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID)
        {
            return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault();
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }

    All pretty straightforward, I thought. Get_SystemSalesTaxListByZipCode is always returning null, though, even when it is given a ZIP code that exists in that table. If I write the method like this, it returns the row I want:

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            var salesTax = from s in db.SystemSalesTaxLists
                           where s.ZipCode == strZipCode
                           select s;
            return salesTax.FirstOrDefault();
        }

    Why does the other method not return the same, as the query should be identical? Note that the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when I give it a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes? Thanks!
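
    A sketch of the pattern often suggested for these helpers (names follow the post; whether it explains the null result in this particular case is an assumption): return IQueryable<T> from the base method so the Where/FirstOrDefault compose into a single SQL statement instead of filtering an IEnumerable in memory.

        public IQueryable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return db.SystemSalesTaxLists;
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            // Composes into one WHERE ZipCode = @p0 query on the server.
            return Get_SystemSalesTaxList().FirstOrDefault(s => s.ZipCode == strZipCode);
        }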

  • Dojo StackContainer children are not resizing on browser maximise/restore

    - by Andy
    Hi. I have the following nested layout in a Dojo 1.4 app:

        BorderContainer 1
        -- StackContainer 1
        ---- BorderContainer 2
        ---- BorderContainer 3

    The StackContainer is sized with width and height 100%. When I resize the browser window using maximise/restore, the StackContainer correctly resizes to the center region of its parent BorderContainer. The problem I have is that the StackContainer children (BorderContainer 2 and 3) do not get resized to the StackContainer's contentBox. Is there something special you have to do to force a resize of StackContainer children? I have tried calling StackContainer1.resize(), but this makes no difference. Thanks in advance.

    Additional information: Thanks for the reply, peller. The widget hierarchy that contains the StackContainer is actually a custom widget, so the StackContainer is not actually in a BorderContainer directly, but has its height and width explicitly set to 100%. This works, and the StackContainer is resized correctly on browser maximise. The direct children of the StackContainer are BorderContainers, and it is these BorderContainers that do not get resized when the StackContainer is resized.

    One point to note is that when the StackContainer is created in markup, the StackContainer children are empty divs. These divs are then used as placeholders for custom widget creation, e.g.

        var widget = new com.company.widget(params, placeholderDiv);

    where placeholderDiv is a direct child of the StackContainer in markup. Should I be adding the programmatically created 'widget' to the StackContainer using addChild instead?

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys and have a tendency to use them even on small projects to keep my database from being filled with orphaned data. On larger projects I end up with gobs of keys which cover upwards of 8-10 layers of data. I want to know if anyone could suggest a graceful way of handling 'expected errors' from the MySQL database in a way that lets me construct meaningful messages for the end user.

    I will explain 'expected errors' with an example. Let's say I have a set of tables used for basic discussions:

        discussion
        questions
        responses
        users

    Hierarchically they would probably look something like this:

        - users
        -- discussion
        --- questions
        ---- responses

    When I attempt to delete a user, the FKs will check discussions, and if any discussions exist the deletion is restricted; deleting a discussion checks questions, and deleting a question checks responses. An 'expected error' in this case would be attempting to delete a user: unless they are newly created, I can anticipate that one or more foreign keys will fail, causing an error. What I WANT to do is to catch that error on deletion and be able to tell the end user something like 'We're sorry, but all discussions must be removed before you can delete this user...'.

    Now I know I can keep and maintain matching arrays in PHP and map specific errors to messages, but that is messy and prone to becoming stagnant, or I could manually run a set of selects prior to attempting the deletion, but then I am doing just as much work as without using FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note, I generally use CodeIgniter for my application development, so if that would open up an avenue through that framework, please consider that in your answers. Thanks in advance.
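
    A sketch of the error-mapping idea, written against plain mysqli rather than CodeIgniter's DB layer (so the connection setup is illustrative): MySQL reports a delete blocked by a referencing child row as error 1451 (ER_ROW_IS_REFERENCED_2), and that one code can be turned into a friendly message without pre-checking every child table.

        $mysqli = new mysqli('localhost', 'app_user', 'secret', 'app_db'); // assumed credentials

        $mysqli->query('DELETE FROM users WHERE id = ' . (int) $userId);

        if ($mysqli->errno === 1451) {
            // A child row (discussion, question, response...) still references this user.
            $message = "We're sorry, but all discussions must be removed before you can delete this user.";
        } elseif ($mysqli->errno !== 0) {
            $message = 'An unexpected database error occurred.';
        }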

  • Who's setting TCP window size down to 0, Indy or Windows?

    - by François
    We have an application server which has been observed sending headers with TCP window size 0 at times when the network had congestion (at a client's site). We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K in adaptation to the available throughput, and we would like to be able to act upon it becoming 0 (nothing gets sent, users wait = no good). So, any info, links or pointers to Indy code are welcome.

    Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-) Note: it's Indy 9 / Delphi 2007 on Windows Server 2003 SP2.

    More gory details: The TCP zero window cases happen on the middle tier talking to the DB server. They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation). Two major network issues causing bottlenecks have been identified. The TCP zero window happened when there was network congestion, but may or may not have been caused by it. We want to know when that happens and have a way to do something (logging at least) in our code. So where should we hook in (in Indy?) to know when that condition occurs?

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following explain table:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  "Using where; Using temporary; Using filesort"
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       "Using where; Using index"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      "Using where"
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      "Using where"

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), and then the HDD kicks in, and the query slows to a halt.

    I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Null  Index_type  Comment  Index_comment
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL          BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL          BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL          BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL          BTREE

    But it is still using the temporary table. The db engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another db engine.
