Search Results

Search found 19928 results on 798 pages for 'matt null'.

Page 183/798

  • Globals are bad! But should I use them in this context?

    - by Matt
    Would the $link to my database be a reasonable thing to put in the GLOBAL scope? In my setup (lots of functions), it seems as though having just this one variable in the global scope would be wise. I am currently passing it into and out of the functions so that I do not have it in the global scope, but that is a bit of a hindrance to my script. Please advise.
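
    For context, a minimal sketch of a common middle ground, assuming mysqli and a hypothetical get_db_link() helper (neither appears in the question): a single accessor holds the link in a static variable, so functions can fetch it on demand without a global and without threading $link through every call.

        <?php
        // Hypothetical accessor: keeps the link out of the global scope, but avoids
        // passing $link into every function by hand.
        function get_db_link()
        {
            static $link = null;              // created once, reused on later calls
            if ($link === null) {
                $link = mysqli_connect('localhost', 'user', 'pass', 'mydb');
            }
            return $link;
        }

        function count_users()
        {
            $result = mysqli_query(get_db_link(), 'SELECT COUNT(*) FROM users');
            $row = mysqli_fetch_row($result);
            return (int) $row[0];
        }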

    Read the article

  • Numeric Data Entry in WPF

    - by Matt Hamilton
    How are you handling the entry of numeric values in WPF applications? Without a NumericUpDown control, I've been using a TextBox and handling its PreviewKeyDown event with the code below, but it's pretty ugly. Has anyone found a more graceful way to get numeric data from the user without relying on a third-party control?

        private void NumericEditPreviewKeyDown(object sender, KeyEventArgs e)
        {
            bool isNumPadNumeric = (e.Key >= Key.NumPad0 && e.Key <= Key.NumPad9) || e.Key == Key.Decimal;
            bool isNumeric = (e.Key >= Key.D0 && e.Key <= Key.D9) || e.Key == Key.OemPeriod;

            if ((isNumeric || isNumPadNumeric) && Keyboard.Modifiers != ModifierKeys.None)
            {
                e.Handled = true;
                return;
            }

            bool isControl = ((Keyboard.Modifiers != ModifierKeys.None && Keyboard.Modifiers != ModifierKeys.Shift)
                || e.Key == Key.Back || e.Key == Key.Delete || e.Key == Key.Insert
                || e.Key == Key.Down || e.Key == Key.Left || e.Key == Key.Right || e.Key == Key.Up
                || e.Key == Key.Tab || e.Key == Key.PageDown || e.Key == Key.PageUp
                || e.Key == Key.Enter || e.Key == Key.Return || e.Key == Key.Escape
                || e.Key == Key.Home || e.Key == Key.End);

            e.Handled = !isControl && !isNumeric && !isNumPadNumeric;
        }
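
    One commonly suggested alternative, sketched here under the assumption that validating the proposed text is acceptable (the handler names NumericPreviewTextInput/NumericPasting are hypothetical): filter PreviewTextInput and the DataObject.Pasting attached event instead of enumerating keys.

        // Sketch for a Window/UserControl code-behind (System.Windows.Controls / System.Windows.Input).
        // Hooked up in XAML: PreviewTextInput="NumericPreviewTextInput" and
        // DataObject.Pasting="NumericPasting" on the TextBox.
        private static bool IsNumericText(string text)
        {
            double value;
            return double.TryParse(text, out value);
        }

        private void NumericPreviewTextInput(object sender, TextCompositionEventArgs e)
        {
            var box = (TextBox)sender;
            string proposed = box.Text.Remove(box.SelectionStart, box.SelectionLength)
                                      .Insert(box.SelectionStart, e.Text);
            e.Handled = !IsNumericText(proposed);    // reject input that would not parse as a number
        }

        private void NumericPasting(object sender, DataObjectPastingEventArgs e)
        {
            var text = e.DataObject.GetData(typeof(string)) as string;
            if (text == null || !IsNumericText(text))
                e.CancelCommand();                   // block non-numeric pastes
        }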

    Read the article

  • Mobile App Data Synchronization

    - by Matt Rogish
    Let's say I have a mobile app that uses an HTML5 SQLite DB (and/or the HTML5 key-value store). Assets (media files, PDFs, etc.) are stored locally on the mobile device. Luckily, the mobile device is a read-only copy of the "centralized" storage, so the mobile device won't have to propagate changes upstream. However, as the server changes assets (creates new ones, modifies existing ones, deletes old ones) I need to propagate those changes back to the mobile app. Assume that server changes are grouped into changesets (version number n) that contain some information (added element XYZ, deleted id = 45, etc.) and that the mobile device has limited CPU/bandwidth, so most of the processing has to take place on the server. I can think of a couple of methods to do this. All have trade-offs, and at this point I'm unsure which is the right course of action...

    Method 1: For changeset n, store the "diff" of the current n and previous n-1. When a client with version y asks if there have been any changes, send the changesets from version y up to the current version, e.g. added item 334, contents: xxx; deleted picture 44; deleted PDF 11; changed 33; added picture 99. Characteristics:
    - Diffs take up space, although in theory they would be kept small. However, all diffs must be kept around indefinitely (should a v1 app not have been updated for a year, it must apply v2..v100).
    - High-latency devices (mobile apps) will incur a penalty to send lots of small files (assume they cannot be zipped or tarred up into one file).
    - Very few server CPU resources required, as all the server does is send the client a list of files.
    - "Dumb" - if I change an item in changeset 3, and change it to something else in 4, the client is going to perform both actions, even though #3 is rendered moot by #4. Or, if an asset is added in #4 and removed in #5, the client will download a file just to delete it later.

    Method 2: Very similar to Method 1, except on the server, do some sort of diff between the changesets represented by the app version and the server version. Package that up and send that single changeset to the client. Characteristics:
    - Client-efficient: the client only has to process one file; duplicate or irrelevant changes are stripped out.
    - Server CPU/space intensive: the changesets must be diff'd and then written out to a file that is then sent to the client. This makes diff server scalability an issue. There are possibly ways to cache the results and re-use them, but in the wild there's likely to be a lot of different versions, so the diff re-use has a limit.
    - The diff algorithm is complicated. The changesets must be structured in such a way that an efficient and effective diff can be performed.

    Method 3: Instead of keeping diffs, write out the entire versioned asset collection to a mobile-database import file. When a client requests an update, send the entire database to the client and have them update their assets appropriately. Characteristics:
    - Conceptually simple - easy to develop and deploy.
    - Very inefficient, as the client database is restored on every update. If only one new thing was added, the whole database is refreshed.
    - Server space and CPU efficient. Only the latest version of the DB needs to be kept around, and the server just throws the file to the client.

    Others?? Thoughts? Thanks!!
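
    As an aside, a minimal sketch of the collapsing idea behind Method 2; the changeset shape here is hypothetical (each entry is { op: 'add' | 'update' | 'delete', id: <asset id>, data: ... }) and this is not a full diff algorithm, just the "later operations supersede earlier ones" rule.

        // Sketch only: collapse a list of changesets into one net changeset.
        function collapseChangesets(changesets) {
          var net = {};                       // latest surviving operation per asset id
          changesets.forEach(function (changeset) {
            changeset.forEach(function (change) {
              var previous = net[change.id];
              if (change.op === 'delete' && previous && previous.op === 'add') {
                delete net[change.id];        // added then deleted: the client never needs to see it
              } else {
                net[change.id] = change;      // later ops supersede earlier ones
              }
            });
          });
          return Object.keys(net).map(function (id) { return net[id]; });
        }

        // Example: asset 4 is added then deleted, asset 7 is updated twice.
        var result = collapseChangesets([
          [{ op: 'add', id: 4, data: 'pic.png' }, { op: 'update', id: 7, data: 'v1' }],
          [{ op: 'delete', id: 4 }, { op: 'update', id: 7, data: 'v2' }]
        ]);
        // result: [ { op: 'update', id: 7, data: 'v2' } ]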

    Read the article

  • How to prevent MSBUILD from copying dependent GAC assemblies to bin

    - by Matt Wrock
    I have an MSBuild task that builds my solution, and I am migrating it from .NET 3.5 to 4.0. I have some dependent DLLs that have Copy Local set to true. The 4.0 version of MSBuild is not only copying the dependent DLL (which I want), it is also copying all dependent assemblies of that DLL from the 32-bit version of the GAC to my bin. Not only do I not want these files being copied from the GAC, I especially do not want the 32-bit versions for this 64-bit build. Has the behavior changed in MSBuild 4.0? And does anyone know how to force MSBuild to use the behavior in 3.5?

    Read the article

  • Writing a Jeweler Rakefile that adds dependencies depending on RUBY_ENGINE (ruby or jruby)

    - by Matt Zukowski
    I have a Rakefile that includes this:

        Jeweler::Tasks.new do |gem|
          # ...
          gem.add_dependency('json')
        end

    The gemspec that this generates builds a gem that can't be installed on jruby because the 'json' gem is native. For jruby, this would have to be:

        Jeweler::Tasks.new do |gem|
          # ...
          gem.add_dependency('json-jruby')
        end

    How do I conditionally add the dependency for 'json-jruby' when RUBY_ENGINE == 'java'? It seems like my only option is to manually edit the gemspec file that jeweler generates to add the RUBY_ENGINE check. But I'd rather avoid this, since it kind of defeats the purpose of using jeweler in the first place. Any ideas?

    Read the article

  • Lazy Loading Association and Casting

    - by Zuber
    I am using NHibernate 2.0.1 and .NET. I am facing issues with lazy loading an association. I have a BusinessObject class that has associations to other BusinessObjects in it, and it can go deeper. The following function is in the BusinessObject to read the values of a collection in the BusinessObject:

        public virtual object GetFieldValue(string fieldName)
        {
            var fieldItems = fieldName.Split(AppConstants.DotChar);
            var objectToRead = this;
            for (var i = 0; i < fieldItems.Length - 1; i++)
            {
                objectToRead = (BusinessObject) objectToRead.GetFieldValue(fieldItems[i]);
            }
            //if (objectToRead._data == null) return objectToRead.SystemId + " Error: _data was null";
            return objectToRead.FieldValue(fieldName.LastItem());
        }

    The FieldValue function is described below:

        private object FieldValue(string fieldName)
        {
            return _data.Contains(fieldName) ? _data[fieldName] : null;
        }

    The BusinessObject has a dictionary _data which stores the field values. Assume the fieldName is BusinessDriver.Description and the BusinessObject which has this field is StrategyBusinessDriver. This code breaks the field name down into two parts - BusinessDriver & Description. The first iteration reads the BusinessDriver object from StrategyBusinessDriver. It is cast into a BusinessObject type so that I can call GetFieldValue again on it to read the next field, i.e. Description in the BusinessDriver. The problem is that when I read the BusinessDriver in the first iteration and cast it, I get the Ids and all other details of the BusinessObject, but the field dictionary _data and other collections are not fetched. They should be fetched lazily when I read the _data of the BusinessObject. However, this does not happen and I get an error that _data is null. Is there something wrong in the code that prevents the collection from being fetched lazily? Please ask for more clarifications if needed. Thanks in advance. UPDATE: It works when I don't use lazy loading.

    Read the article

  • MySQL select using datetime, group by date only

    - by Matt
    Is it possible to select a datetime field from a MySQL table and group by the date only? I'm trying to output a list of events that happen at multiple times, grouped by the date each happened on. My table/data looks like this (the timestamp is a datetime field):

        1. 2010-03-21 18:00:00 Event1
        2. 2010-03-21 18:30:00 Event2
        3. 2010-03-30 13:00:00 Event3
        4. 2010-03-30 14:00:00 Event4

    I want to output something like this:

        March 21st
          1800 - Event 1
          1830 - Event 2
        March 30th
          1300 - Event 3
          1400 - Event 4

    Thanks!
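
    A minimal sketch, assuming a hypothetical events table with columns event_time and name (not the real schema): DATE() strips the time for grouping, while DATE_FORMAT/TIME_FORMAT produce the headings and times shown above. Collapsing each day's rows with GROUP_CONCAT is optional; ordering by date and breaking groups in application code is often simpler.

        -- Sketch: one row per day, with the day's events collapsed into a single string.
        -- Table and column names (events, event_time, name) are assumptions.
        SELECT
            DATE(event_time)                      AS event_date,
            DATE_FORMAT(MIN(event_time), '%M %D') AS heading,   -- e.g. 'March 21st'
            GROUP_CONCAT(CONCAT(TIME_FORMAT(event_time, '%H%i'), ' - ', name)
                         ORDER BY event_time SEPARATOR ', ')    AS day_events
        FROM events
        GROUP BY DATE(event_time)
        ORDER BY event_date;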

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
    I'm getting the following error when trying to connect to a remote MySQL server: Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've installed the ODBC 5.1 driver, and can connect to the database using the Data Sources (ODBC) tool in Control Panel. However, when I try to run my C# script to connect, I get the above error. I've read it's something to do with trust levels or something? I don't quite understand what people were talking about though. I went to C:... Framework/v2.0.50727/CONFIG and added to the medium and high trust.config files, but that didn't help. Can someone help me out here please? My connection string is:

        MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" +
                      "SERVER=" + m_strHost + ";" +
                      "PORT=3306;" +
                      "DATABASE=" + m_strDatabase + ";" +
                      "UID=" + m_strUserName + ";" +
                      "PWD=" + m_strPassword + ";" +
                      "OPTION=3;";

    Read the article

  • AudioQueue ate my buffer (first 15 milliseconds of it)

    - by iter
    I am generating audio programmatically. I hear gaps of silence between my buffers. When I hook my phone to a scope, I see that the first few samples of each buffer are missing, and in their place is silence. The length of this silence varies from almost nothing to as much as 20 ms. My first thought is that my original callback function takes too much time. I replace it with the shortest one possible - it re-enqueues the same buffer over and over. I observe the same behavior.

        AudioQueueRef aq;
        AudioQueueBufferRef aq_buffer;
        AudioStreamBasicDescription asbd;

        void aq_callback (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
        {
            OSStatus s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
        }

        void aq_init(void)
        {
            OSStatus s;

            asbd.mSampleRate = AUDIO_SAMPLES_PER_S;
            asbd.mFormatID = kAudioFormatLinearPCM;
            asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
            asbd.mBytesPerPacket = 1;
            asbd.mFramesPerPacket = 1;
            asbd.mBytesPerFrame = 1;
            asbd.mChannelsPerFrame = 1;
            asbd.mBitsPerChannel = 8;
            asbd.mReserved = 0;

            int PPM_PACKETS_PER_SECOND = 50;
            // one buffer is as long as one PPM frame
            int BUFFER_SIZE_BYTES = asbd.mSampleRate/PPM_PACKETS_PER_SECOND*asbd.mBytesPerFrame;

            s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);
            s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer);

            // put samples in the buffer
            buffer_data(my_data, aq_buffer);

            s = AudioQueueStart(aq, NULL);
            s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
        }
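
    For what it's worth, the single-buffer pattern above leaves the queue with nothing to play while the callback runs. A sketch of the commonly recommended layout (an assumption, not a confirmed fix for this case): keep several buffers in flight and enqueue them all before AudioQueueStart, so playback never waits on the callback. fill_buffer() here is hypothetical.

        #include <AudioToolbox/AudioToolbox.h>

        #define NUM_BUFFERS 3
        static AudioQueueBufferRef buffers[NUM_BUFFERS];

        void prime_and_start(AudioQueueRef queue, UInt32 buffer_size_bytes)
        {
            for (int i = 0; i < NUM_BUFFERS; i++) {
                AudioQueueAllocateBuffer(queue, buffer_size_bytes, &buffers[i]);
                fill_buffer(buffers[i]);            /* hypothetical: writes samples and sets
                                                       buffers[i]->mAudioDataByteSize */
                AudioQueueEnqueueBuffer(queue, buffers[i], 0, NULL);
            }
            AudioQueueStart(queue, NULL);           /* the queue already has data when it starts */
        }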

    Read the article

  • How do I pass data from a BroadcastReceiver through to an Activity being started?

    - by Tom Hume
    I've got an Android application which needs to be woken up sporadically throughout the day. To do this, I'm using the AlarmManager to set up a PendingIntent and have this trigger a BroadcastReceiver. This BroadcastReceiver then starts an Activity to bring the UI to the foreground. All of the above seems to work, in that the Activity launches itself correctly; but I'd like the BroadcastReceiver to notify the Activity that it was started by the alarm (as opposed to being started by the user). To do this I'm trying, from the onReceive() method of the BroadcastReceiver, to set a variable in the extras bundle of the intent, thus:

        Intent i = new Intent(context, MyActivity.class);
        i.putExtra(wakeupKey, true);
        i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(i);

    In the onResume() method of my Activity, I then look for the existence of this boolean variable:

        protected void onResume() {
            super.onResume();
            String wakeupKey = "blah";
            if (getIntent()!=null && getIntent().getExtras()!=null)
                Log.d("app", "onResume at " + System.currentTimeMillis() + ":" + getIntent().getExtras().getBoolean(wakeupKey));
            else
                Log.d("app", "onResume at " + System.currentTimeMillis() + ": null");
        }

    The getIntent().getExtras() call in onResume() always returns null - I don't seem to be able to pass any extras through at all in this bundle. If I use the same method to bind extras to the PendingIntent which triggers the BroadcastReceiver, however, the extras come through just fine. Can anyone tell me what's different about passing a bundle from a BroadcastReceiver to an Activity, as opposed to passing the bundle from an Activity to a BroadcastReceiver? I fear I may be doing something very very obvious wrong here...
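
    One frequently cited gotcha (not necessarily the cause here): if the Activity instance already exists, starting it again does not replace the Intent that getIntent() returns; the new extras arrive in onNewIntent() instead. A sketch, inside MyActivity, of reading the extra on both paths:

        // Sketch: handle the extra whether the Activity is freshly created or re-delivered an Intent.
        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            setIntent(intent);                       // so later getIntent() calls see the fresh extras
            handleWakeup(intent);
        }

        @Override
        protected void onResume() {
            super.onResume();
            handleWakeup(getIntent());               // covers the cold-start path
        }

        private void handleWakeup(Intent intent) {
            boolean fromAlarm = intent != null && intent.getBooleanExtra("blah", false);
            Log.d("app", "woken by alarm: " + fromAlarm);
        }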

    Read the article

  • Update table.column with another table.column with common joined column

    - by Matt
    Hit a speed bump trying to update some column values in my table from another table. This is what is supposed to happen when everything works: correct all the city, state entries in tblWADonations by creating an update statement that moves the city and state from the joined zip code table into the tblWADonations city and state columns.

        TBL NAME        | COLUMN NAMES
        tblZipcodes     | zip, city, state
        tblWADonations  | zip, oldcity, oldstate

    This is what I have so far:

        UPDATE tblWADonations
        SET oldCity = tblZipCodes.city, oldState = tblZipCodes.state
        FROM tblWADonations
        INNER JOIN tblZipCodes ON tblWADonations.zip = tblZipCodes.zip
        WHERE oldCity <> tblZipcodes.city;

    There seem to be easy ways to do this online, but I am overlooking something. Tried this by hand and in the editor; this is what it kicks back:

        Msg 8152, Level 16, State 2, Line 1
        String or binary data would be truncated.
        The statement has been terminated.

    Please include a SQL statement or where I need to make the edit so I can mark this post as a reference in my favorites. Thanks!
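
    As a hedged aside, error 8152 generally means a source value is longer than the destination column. A sketch of one way to locate the offending rows; the lengths 30 and 2 are placeholders, not the real sizes of tblWADonations.oldcity and tblWADonations.oldstate:

        -- Sketch: list source values that would not fit in the destination columns.
        SELECT z.zip, z.city, z.state, LEN(z.city) AS city_len, LEN(z.state) AS state_len
        FROM tblZipCodes AS z
        INNER JOIN tblWADonations AS d ON d.zip = z.zip
        WHERE LEN(z.city) > 30 OR LEN(z.state) > 2;   -- substitute the actual column sizes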

    Read the article

  • JavaScript types

    - by Alex Ivasyuv
    Hi, as per http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf JavaScript has 6 types: undefined, null, boolean, string, number, object.

        var und;
        console.log(typeof und);  // <-- undefined
        var n = null;
        console.log(typeof n);    // <--- **object**!
        var b = true;
        console.log(typeof b);    // <-- boolean
        var str = "myString";
        console.log(typeof str);  // <-- string
        var int = 10;
        console.log(typeof int);  // <-- number
        var obj = {};
        console.log(typeof obj);  // <-- object

    Question 1: Why is typeof null "object", if null is supposed to be its own type?

    Question 2: What about function?

        var f = function() {};
        console.log(typeof f);    // <-- function

    Variable f has "function" type. Why isn't that specified in the specification as a separate type? Thanks,

    Read the article

  • Is there any danger in calling free() or delete instead of delete[]? [closed]

    - by Matt Joiner
    Possible Duplicate: (POD) freeing memory: is delete[] equal to delete?

    Does delete deallocate the elements beyond the first in an array?

        char *s = new char[n];
        delete s;

    Does it matter in the above case, seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array? For more complex types, would delete call the destructor of objects beyond the first one?

        Object *p = new Object[n];
        delete p;

    How can delete[] deduce the number of Objects beyond the first? Wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example, one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more. For primitive types, such as char and int, is there any difference between:

        int *p = new int[n];
        delete p;
        delete[] p;
        free(p);

    except for the routes taken by the respective calls through the delete/free deallocation machinery?
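
    For reference, a minimal sketch of the pairings the language actually guarantees; anything else (including plain delete on a new[] pointer) is undefined behavior, even when it appears to work for trivial types:

        #include <cstdlib>

        struct Object { /* ... */ };

        int main() {
            // Each allocation has exactly one matching release.
            char* s = new char[100];
            delete[] s;                                   // new[]  -> delete[]

            Object* one = new Object;
            delete one;                                   // new    -> delete

            Object* many = new Object[10];
            delete[] many;                                // new[]  -> delete[]; runs all 10 destructors

            int* raw = static_cast<int*>(std::malloc(10 * sizeof(int)));
            std::free(raw);                               // malloc -> free
            return 0;
        }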

    Read the article

  • WPF Animation / Processing priority

    - by Matt B
    Hi all, I have a button which has an animation (in XAML) on its click event. Cool so far. The problem is that I also have processing occurring on the click event (so I can do stuff) - and this occurs first. How do I prioritise or re-order things so that the animation takes place before any custom processing... Thanks.

    Read the article

  • "Programming In Haskell" error in sat function

    - by Matt Ellen
    I'm in chapter 8 of Graham Hutton's Programming in Haskell and I'm copying the code and testing it in GHC. See the slides here: http://www.cis.syr.edu/~sueo/cis352/chapter8.pdf - in particular slide 15. The relevant code I've copied so far is:

        type Parser a = String -> [(a, String)]

        pih_return :: a -> Parser a
        pih_return v = \inp -> [(v, inp)]

        failure :: Parser a
        failure = \inp -> []

        item :: Parser Char
        item = \inp -> case inp of
                         []     -> []
                         (x:xs) -> [(x,xs)]

        parse :: Parser a -> String -> [(a, String)]
        parse p inp = p inp

        sat :: (Char -> Bool) -> Parser Char
        sat p = do x <- item
                   if p x then pih_return x else failure

    I have changed the name of the return function from the book to pih_return so that it doesn't clash with the Prelude return function. The errors are in the last function, sat. I have copied this directly from the book. As you can probably see, p is a function from Char to Bool (e.g. isDigit) and x is of type [(Char, String)], so that's the first error. Then pih_return takes a value v and returns [(v, inp)] where inp is a String. This causes an error in sat because the v being passed is x, which is not a Char. I have come up with this solution, by explicitly including inp in sat:

        sat :: (Char -> Bool) -> Parser Char
        sat p inp = do x <- item inp
                       if p (fst x) then pih_return (fst x) inp else failure inp

    Is this the best way to solve the issue?
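
    One observation: since Parser is just a synonym for String -> [(a, String)], the do-block is not sequencing in a parser monad at all, which is why the types do not line up as the book intends. For comparison, a sketch of sat written directly against the synonym with no do-notation (whether this is "best" is a matter of taste):

        -- Sketch: sat defined on the underlying function type, using only parse, item,
        -- pih_return and failure as given above.
        sat :: (Char -> Bool) -> Parser Char
        sat p = \inp -> case parse item inp of
                          []           -> []
                          (x, out) : _ -> if p x then parse (pih_return x) out
                                                 else parse failure out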

    Read the article

  • AVL tree in C language

    - by I_S_W
    Hey all; I am currently doing a project that requires the use of AVL trees. The insert function I wrote for the AVL does not seem to be working - it works for 3 or 4 nodes at maximum. I would really appreciate your help. The attempt is below:

        Tree insert(Tree t, char name[80], int num)
        {
            if (t == NULL)
            {
                t = (Tree)malloc(sizeof(struct node));
                if (t != NULL)
                {
                    strcpy(t->name, name);
                    t->num = num;
                    t->left = NULL;
                    t->right = NULL;
                    t->height = 0;
                }
            }
            else if (strcmp(name, t->name) < 0)
            {
                t->left = insert(t->left, name, num);
                if ((height(t->left) - height(t->right)) == 2)
                    if (strcmp(name, t->left->name) < 0) {
                        t = s_rotate_left(t);
                    } else {
                        t = d_rotate_left(t);
                    }
            }
            else if (strcmp(name, t->name) > 0)
            {
                t->right = insert(t->right, name, num);
                if ((height(t->right) - height(t->left)) == 2)
                    if (strcmp(name, t->right->name) > 0) {
                        t = s_rotate_right(t);
                    } else {
                        t = d_rotate_right(t);
                    }
            }
            t->height = max(height(t->left), height(t->right)) + 1;
            return t;
        }

    Read the article

  • C file read leaves garbage characters

    - by KJ
    Hi. I'm trying to read the contents of a file into my program but I keep occasionally getting garbage characters at the end of the buffers. I haven't been using C a lot (rather I've been using C++) but I assume it has something to do with streams. I don't really know what to do though. I'm using MinGW. Here is the code (this gives me garbage at the end of the second read):

        #include <stdio.h>
        #include <stdlib.h>

        char* filetobuf(char *file)
        {
            FILE *fptr;
            long length;
            char *buf;

            fptr = fopen(file, "r"); /* Open file for reading */
            if (!fptr) /* Return NULL on failure */
                return NULL;
            fseek(fptr, 0, SEEK_END); /* Seek to the end of the file */
            length = ftell(fptr); /* Find out how many bytes into the file we are */
            buf = (char*)malloc(length+1); /* Allocate a buffer for the entire length of the file and a null terminator */
            fseek(fptr, 0, SEEK_SET); /* Go back to the beginning of the file */
            fread(buf, length, 1, fptr); /* Read the contents of the file in to the buffer */
            fclose(fptr); /* Close the file */
            buf[length] = 0; /* Null terminator */

            return buf; /* Return the buffer */
        }

        int main()
        {
            char* vs;
            char* fs;

            vs = filetobuf("testshader.vs");
            fs = filetobuf("testshader.fs");
            printf("%s\n\n\n%s", vs, fs);

            free(vs);
            free(fs);

            return 0;
        }

    The filetobuf function is from this example http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_%28C_/_SDL%29. It seems right to me though. So anyway, what's up with that?
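
    A hedged observation: on Windows (MinGW), reading in text mode makes fread return fewer bytes than ftell reported because of CR/LF translation, so the bytes between the end of what was read and the terminator at buf[length] are uninitialized garbage. A sketch of the usual remedy - open in binary mode and terminate at the count fread actually returns:

        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch: same idea as filetobuf, but binary mode plus terminating at the
           number of bytes actually read, so no uninitialized tail gets printed. */
        char* filetobuf_safe(const char *path)
        {
            FILE *fptr = fopen(path, "rb");           /* "rb": no CR/LF translation */
            if (!fptr)
                return NULL;

            fseek(fptr, 0, SEEK_END);
            long length = ftell(fptr);
            fseek(fptr, 0, SEEK_SET);

            char *buf = malloc(length + 1);
            size_t got = fread(buf, 1, length, fptr); /* may be less than length; use the real count */
            fclose(fptr);

            buf[got] = '\0';                          /* terminate at what was actually read */
            return buf;
        }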

    Read the article

  • Forking with Pipes

    - by Luke
    Hello I have tried to do fork() and piping in main and it works perfectly fine but when I try to implement it in a function for some reason I don't get any output, this is my code:

        void cmd(int **pipefd, int count, int type, int last);

        int main(int argc, char *argv[])
        {
            int pipefd[3][2];
            int i, total_cmds = 3, count = 0;
            int in = 1;

            for(i = 0; i < total_cmds; i++){
                pipe(pipefd[count++]);
                cmd(pipefd, count, i, 0);
            }

            /*Last Command*/
            cmd(pipefd, count, i, 1);
            exit(EXIT_SUCCESS);
        }

        void cmd(int **pipefd, int count, int type, int last){
            int child_pid, i, i2;
            if ((child_pid = fork()) == 0) {
                if(count == 1){
                    dup2(pipefd[count-1][1], 1); /*first command*/
                }
                else if(last != 0){
                    dup2(pipefd[count - 2][0], 0); /*middle commands*/
                    dup2(pipefd[count - 1][1], 1);
                }
                else if(last == 1){
                    dup2(pipefd[count - 1][0], 0); /*last command*/
                }
                for(i = 0; i < count; i++){ /*close pipes*/
                    for(i2 = 0; i2 < 2; i2++){
                        close(pipefd[i][i2]);
                    }
                }
                if(type == 0){
                    execlp("ls", "ls", "-al", NULL);
                }
                else if(type == 1){
                    execlp("grep", "grep", ".bak", NULL);
                }
                else if(type == 2){
                    execl("/usr/bin/wc", "wc", NULL);
                }
                else if(type == 3){
                    execl("/usr/bin/wc", "wc", "-l", NULL);
                }
                perror("exec");
                exit(EXIT_FAILURE);
            }
            else if (child_pid < 0) {
                perror("fork");
                exit(EXIT_FAILURE);
            }
        }

    I checked the file descriptors and it is opening the right ones, not sure what the problem could be..

    Read the article

  • Trouble with inheritance

    - by Matt
    I'm relatively new to programming, so excuse me if I get some terms wrong (I've learned the concepts, I just haven't actually used most of them). Trouble: I currently have a class I'll call Bob; its parent class is Cody, and Cody has a method called Foo(). I want Bob to have the Foo() method as well, except with a few extra lines of code. I've attempted to do Foo() : base(), however that doesn't seem to work. Is there some simple solution to this?
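
    Assuming this is C# (the Foo() : base() syntax suggests it), a sketch of the usual pattern: mark Foo() virtual in the parent, override it in the child, and call base.Foo() before the extra lines.

        class Cody
        {
            public virtual void Foo()
            {
                // original behaviour
            }
        }

        class Bob : Cody
        {
            public override void Foo()
            {
                base.Foo();          // run Cody's version first
                // ...a few extra lines specific to Bob
            }
        }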

    Read the article

  • Cascading Deletes in SQL Server 2008 not working

    - by Vaccano
    I have the following table setup:

        Bag
         |
         +-> BagID (Guid)
         +-> BagNumber (Int)

        BagCommentRelation
         |
         +-> BagID (Int)
         +-> CommentID (Guid)

        BagComment
         |
         +-> CommentID (Guid)
         +-> Text (varchar(200))

    BagCommentRelation has foreign keys to Bag and BagComment. So, I turned on cascading deletes for both those foreign keys, but when I delete a bag, it does not delete the Comment row. Do I need to break out a trigger for this? Or am I missing something? (I am using SQL Server 2008.)

    Note: Posting the requested SQL. This is the definition of the BagCommentRelation table. (I had the type of the BagID wrong - I thought it was a Guid but it is an int.)

        CREATE TABLE [dbo].[Bag_CommentRelation](
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [BagId] [int] NOT NULL,
            [Sequence] [int] NOT NULL,
            [CommentId] [int] NOT NULL,
            CONSTRAINT [PK_Bag_CommentRelation] PRIMARY KEY CLUSTERED
            (
                [BagId] ASC,
                [Sequence] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Bag] FOREIGN KEY([BagId])
        REFERENCES [dbo].[Bag] ([Id])
        ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Bag]
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Comment] FOREIGN KEY([CommentId])
        REFERENCES [dbo].[Comment] ([CommentId])
        ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Comment]
        GO

    The row in this table deletes, but the row in the Comment table does not.

    Read the article

  • Linq to SQL Error with Many-to-Many table

    - by Matt Connolly
    I am getting the following error with linq-to-sql when I try to access a many-to-many collection: Members 'Int32 XXX' and 'Int32 YYY' both marked as IsPrimaryKey and IsDbGenerated. The statement is true, in that both of those columns are primary key integers with identity insert. The table I am trying to access has a foreign key to both YYY and ZZZ, and then ZZZ has a foreign key to XXX. There doesn't appear to be anything wrong with my data structure. I tried setting "Child Property" to false on the ZZZ-YYY relationship, but it didn't change anything.

    Read the article

  • WinForms Load Event / Static Initialization Strangeness

    - by Eric J.
    Background

    I'm troubleshooting a WinForms 2.0 program that's already been burned to CD for distribution to an internet-challenged target audience. Some users are experiencing a fatal error that I can reproduce locally.

    Reproducing the Error

    I get the fatal error when I log into my Vista box using a standard user that I just created, even if I run the program as administrator. I do not get the fatal error when I log in as local administrator. I'm not sure that being administrator is necessarily the trigger (since runas did not help). I have reproduced this half a dozen times under each account with consistent results.

    The faulty code

    Base.cs (base class for several user controls, only one of which is shown on the first screen):

        private void BaseWindow_Load(object sender, EventArgs e)
        {
            // This message shown once in both cases
            MessageBox.Show("BaseWindow_Load for " + this.GetType().FullName);
            SkinManager.ApplySkin(this);
        }

    SkinManager.cs:

        private static Skin skin = null;

        public static void ApplySkin(UserControl applyTo)
        {
            if (skin == null)
            {
                skin = new Skin(SkinsDirectory, "Default");
            }
        }

    Skin.cs:

        internal Skin(string skinPath, string skinName)
        {
            config = SkinConfig.Load(path);
        }

    SkinConfig.cs:

        public static SkinConfig Load(string path)
        {
            // This message shown only once running as Admin but twice running as standard user
            System.Windows.Forms.MessageBox.Show("@1");
            // !!! LOCK path HERE !!!
        }

    A user control loads on the first form, which triggers a call to SkinManager.ApplySkin, which checks if skin is null and, if so, assigns it (without thread synchronization or recursion protection), which ultimately causes a file to be opened. When logged in as local admin, that sequence completes just fine. When logged in as my test standard user, ApplySkin is always called a second time while skin is still null, causing a second attempt to load, causing the file to be locked on the second attempt. The error handling is draconian at this point and the program terminates.

    The Question

    While this code can be easily fixed, I would like to understand why the error is happening only in some cases.

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising):

        <script>
        var largeArray = [/* lots of stuff in here */];
        </script>

    In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console:

        Error: script stack space quota is exhausted

    In Chrome, I simply get the sad-tab page.

    Cut to the chase, already

    Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong...

    Edit 1

    @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now.

    @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards.

    @Phrogz: each element looks something like this:

        {dateTime:new Date(1296176400000), terminalId:'terminal999',
         'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680,
         'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0}

    @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all!

    * other than the obvious: sending less data to the browser

    Read the article

  • Saxon Java XPath exception

    - by changed
    Hi, I am trying to read XPath queries from a file and run them against an XML data file. I am doing it through the Saxon XPath processor, but I keep getting this error. The code I am using is from sample code provided with the library. The error I am getting is:

        Exception in thread "main" javax.xml.xpath.XPathExpressionException: Supplied node must be built using the same or a compatible Configuration
            at net.sf.saxon.xpath.XPathExpressionImpl.evaluate(XPathExpressionImpl.java:283)
            at net.sf.saxon.xpath.XPathExpressionImpl.evaluate(XPathExpressionImpl.java:375)
            at edu.xml.XPathExample.go(XPathExample.java:81)
            at edu.xml.XPathExample.main(XPathExample.java:42)

    Here is the method:

        public void go(String filename) throws Exception {
            XPathExpression findLine = null;
            String readLine = null;
            Object document = new XPathEvaluator().setSource(new StreamSource(new File(dataPath)));
            System.setProperty("javax.xml.xpath.XPathFactory:" + NamespaceConstant.OBJECT_MODEL_SAXON,
                               "net.sf.saxon.xpath.XPathFactoryImpl");
            XPathFactory xpf = XPathFactory.newInstance(NamespaceConstant.OBJECT_MODEL_SAXON);
            XPath xpe = xpf.newXPath();
            if (filename != null) {
                this.queryPath = filename;
            }
            try {
                fileReader = new BufferedReader(new FileReader(queryPath));
            } catch (FileNotFoundException ex) {
                System.out.println("File not found: " + ex.getMessage());
            }
            xpe.setXPathVariableResolver(this);
            while (true) {
                try {
                    readLine = fileReader.readLine();
                } catch (IOException ex) {
                    System.out.println("Error while reading file: " + ex.getMessage());
                }
                if (readLine == null) {
                    break;
                }
                findLine = xpe.compile(readLine);
                String resultString = findLine.evaluate(document);
                System.out.println("<output>");
                System.out.println("String Value => " + resultString);
                System.out.println("</output>");
            }
        }

    Read the article
