Search Results

Search found 10789 results on 432 pages for 'cpu upgrade'.


  • Thread.CurrentThread.CurrentUICulture not working consistently

    - by xTRUMANx
    I've been working on a pet project on the weekends to learn more about C# and have encountered an odd problem when working with localization. To be more specific, the problem I have is with System.Threading.Thread.CurrentThread.CurrentUICulture. I've set up my app so that the user can quickly change the language of the app by clicking a menu item. The menu item in turn, saves the two-letter code for the language (e.g. "en", "fr", etc.) in a user setting called 'Language' and then restarts the application. Properties.Settings.Default.Language = "en"; Properties.Settings.Default.Save(); Application.Restart(); When the application is started up, the first line of code in the Form's constructor (even before InitializeComponent()) fetches the Language string from the settings and sets the CurrentUICulture like so: public Form1() { Thread.CurrentThread.CurrentUICulture = new CultureInfo(Properties.Settings.Default.Language); InitializeComponent(); } The thing is, this doesn't work consistently. Sometimes, all works well and the application loads the correct language based on the string saved in the settings file. Other times, it doesn't, and the language remains the same after the application is restarted. At first I thought that I didn't save the language before restarting the application but that is definitely not the case. When the correct language fails to load, if I were to close the application and run it again, the correct language would come up correctly. So this implies that the Language string has been saved but the CurrentUICulture assignment in my form constructor is having no effect sometimes. Any help? Is there something I'm missing of how threading works in C#? This could be machine-specific, so if it makes any difference I'm using Pentium Dual-Core CPU. UPDATE Vlad asked me to check what the CurrentThread's CurrentUICulture is. So I added a MessageBox on my constructor to tell me what the CurrentUICulture two-letter code is as well as the value of my Language user string. MessageBox.Show(string.Format("Current Language: {0}\nCurrent UI Culture: {1}", Properties.Settings.Default.Language, Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName)); When the wrong language is loaded, both the Language string and CurrentUICulture have the wrong language. So I guess the CurrentUICulture has been cleared and my problem is actually with the Language Setting. So I guess the problem is that my application sometimes loads the previously saved language string rather than the last saved language string. If the app is restarted, it will then load the actual saved language string.
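
    A minimal sketch of one arrangement sometimes suggested for this situation: apply the culture once in Program.Main, on the startup thread, before any form (and any localized resource) is created, and re-read the user settings first. Form1 and the Language setting come from the question; everything else here is assumed boilerplate, not the original project's code.

      using System.Globalization;
      using System.Threading;
      using System.Windows.Forms;

      static class Program
      {
          [STAThread]
          static void Main()
          {
              // Re-read user.config so the restarted process doesn't run with a stale copy.
              Properties.Settings.Default.Reload();

              // Apply the saved culture before any form (and its localized resources) is built.
              Thread.CurrentThread.CurrentUICulture =
                  new CultureInfo(Properties.Settings.Default.Language);

              Application.EnableVisualStyles();
              Application.SetCompatibleTextRenderingDefault(false);
              Application.Run(new Form1());
          }
      }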

    Read the article

  • Excel UDF calculation should return 'original' value

    - by LeChe
    Hi all, I've been struggling with a VBA problem for a while now and I'll try to explain it as thoroughly as possible. I have created a VSTO plugin with my own RTD implementation that I am calling from my Excel sheets. To avoid having to use the full-fledged RTD syntax in the cells, I have created a UDF that hides that API from the sheet. The RTD server I created can be enabled and disabled through a button in a custom Ribbon component. The behavior I want to achieve is as follows: If the server is disabled and a reference to my function is entered in a cell, I want the cell to display Disabled If the server is disabled, but the function had been entered in a cell when it was enabled (and the cell thus displays a value), I want the cell to keep displaying that value If the server is enabled, I want the cell to display Loading Sounds easy enough. Here is an example of the - non functional - code: Public Function RetrieveData(id as Long) Dim result as String // This returns either 'Disabled' or 'Loading' result = Application.Worksheet.Function.RTD("SERVERNAME", "", id) RetrieveData = result If(result = "Disabled") Then // Obviously, this recurses (and fails), so that's not an option If(Not IsEmpty(Application.Caller.Value2)) Then // So does this RetrieveData = Application.Caller.Value2 End If End If End Function The function will be called in thousands of cells, so storing the 'original' values in another data structure would be a major overhead and I would like to avoid it. Also, the RTD server does not know the values, since it also does not keep a history of it, more or less for the same reason. I was thinking that there might be some way to exit the function which would force it to not change the displayed value, but so far I have been unable to find anything like that. Any ideas on how to solve this are greatly appreciated! Thanks, Che EDIT: By popular demand, some additional info on why I want to do all this: As I said, the function will be called in thousands of cells and the RTD server needs to retrieve quite a bit of information. This can be quite hard on both network and CPU. To allow the user to decide for himself whether he wants this load on his machine, he or she can disable the updates from the server. In that case, he or she should still be able to calculate the sheets with the values currently in the fields, yet no updates are pushed into them. Once new data is required, the server can be enabled and the fields will be updated. Again, since we are talking about quite a bit of data here, I would rather not store it somewhere in the sheet. Plus, the data should be usable even if the workbook is closed and loaded again.

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this many not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach. I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (Dual core, HyperThreaded cpu running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the Virtual Machine 4 GB of memory, it was slow and I could type complete sentences and watch the VM "catchup". My company has training laptops that we provide for our classes. They are Dell Precision M6400 Intel Core 2 Duo P8700 running at 2.54Ghz with 8 GB of memory and the experience on this laptops is night and day compared to the E6510. They are crisp and you barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component by component comparison and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running a nVidia FX 2700m GPU and the E6510 is running a nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M. http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html 3100M = 111th (E6510) FX 2700m = 47th (Precision M6400) Radeon HD 5870 = 8th (Alienware) The host OS is Windows 7 64bit as is the guest OS, running in Virtual Box 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: Is the GPU significantly impacting the virtual machine's performance or are there other factors that I'm not looking at that I can use to boost the vm's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic datatypes of C, like short, int, long and float; to be exact, all the numerical types. These types need to be known so that the right operations are performed on the right numbers, for example so the FPU can be used to add two float numbers. So the compiler must know what the type is. But when it comes to characters I am a little bit off. I know that the basic C datatype char is there for ASCII character coding. But what I don't know is why you even need another datatype for characters. Why couldn't you just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could just specify it somehow. Another thing is, when you want to use Unicode, you must use the datatype wchar_t. But what if I would like to use some other coding, for example ISO or a Windows code page, instead of UTF? Because wchar_t encodes characters as UTF-16 or UTF-32 (I read it is compiler specific). And what if I wanted to use, for example, some imaginary new 8-byte text coding? What datatype should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I just tell the compiler "get the UTF-32 value of the character I typed and save it into a 4-char field." I thought that text coding was to be dealt with at the end, by the print function for example; that I just need to specify the coding for the compiler to use, since Windows doesn't use ASCII in win32 apps, so I guess the C compiler must convert the char I typed to ASCII from whatever type Windows sends to the editor. And the last thing is, what if I want to use, for example, a 25-byte integer for some high-precision math operations? C has no define-it-yourself datatype. Yes, I know this would be difficult, since all the math operations would need to be changed, because the CPU cannot add 25-byte numbers together. But is there a way to do it? Or is there some math library for it? What if I want to compute Pi to 1000000000000000 digits? :) I know my question is pretty long, but I just wanted to explain my thoughts the best I can in English; since it is not my native language, it is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I have read a lot about text coding, and C tutorials, but nothing about this. Thank you for your time.

    Read the article

  • response.redirect to classic asp failing {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack}

    - by jeff
    I have the following code pasted below. For some reason, the response.redirect seems to be failing and it is maxing out the cpu on my server and just doesn't do anything. The .net code uploads the file fine, but does not redirect to the asp page to do the processing. I know this is absolute rubbish why would you have .net code redirecting to classic asp, it is a legacy app. I have tried putting false or true etc. at the end of the redirect as I have read other people have had issues with this. Please help as it's driving me insane! It's so strange, it runs locally on my machine but won't run on my server! I am getting the following error when I debugged remotely. {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.} (UPDATED) After debugging remotely and taking the redirect out of the try catch, I have found that the redirect is trying to get to the correct location but after it leaves the redirect is just seems to get lost. (almost as if it can't navigate away from the cobra_import project) back up a level to COBRA/pages. Why is this??? This has worked previously!!! public void btnUploadTheFile_Click(object Source, EventArgs evArgs) { //need to check that the uploaded file is an xls file. string strFileNameOnServer = "PJI3.txt"; string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"]; if ("" == strFileNameOnServer) { txtOutput.InnerHtml = "Error - a file name must be specified."; return; } if (null != uplTheFile.PostedFile) { try { uplTheFile.PostedFile.SaveAs(strBaseLocation+strFileNameOnServer); txtOutput.InnerHtml = "File <b>" + strBaseLocation+strFileNameOnServer+"</b> uploaded successfully"; Response.Redirect ("/COBRA/pages/sap_import_pji3_prc.asp"); } catch (Exception e) { txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation+strFileNameOnServer+"</b><br>"+ e.ToString(); } } }
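
    One well-known wrinkle in the snippet above (whether or not it is the whole story here): the single-argument Response.Redirect ends the response by throwing a ThreadAbortException, and that exception lands in the catch (Exception e) block, so the error branch runs even on a successful upload. Below is a sketch, not the original code, of the usual rework: the redirect moves outside the try/catch and is issued without ending the response.

      try
      {
          uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
          txtOutput.InnerHtml = "File <b>" + strBaseLocation + strFileNameOnServer + "</b> uploaded successfully";
      }
      catch (Exception ex)
      {
          txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + ex.ToString();
          return;   // don't redirect if the upload failed
      }

      // No ThreadAbortException: redirect without ending the response, then let
      // ASP.NET skip the rest of the pipeline for this request.
      Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp", false);
      Context.ApplicationInstance.CompleteRequest();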

    Read the article

  • segfault on vector<struct>

    - by Andre
    Hello, I created a struct to hold some data and then declared a vector to hold that struct. But when I do a push_back I get a damn segfault and I have no idea why! My struct is defined as: typedef struct Group { int codigo; string name; int deleted; int printers; int subpage; /*included this when it started segfaulting*/ Group(){ name.reserve(MAX_PRODUCT_LONG_NAME); } ~Group(){ name.clear(); } Group(const Group &b) { codigo = b.codigo; name = b.name; deleted = b.deleted; printers = b.printers; subpage = b.subpage; } /*end of new stuff*/ }; Originally, the struct didn't have the constructor, destructor or copy constructor. I added them later when I read the post below: http://stackoverflow.com/questions/676575/seg-fault-after-is-item-pushed-onto-stl-container but the end result is the same. There is one thing that is bothering me like hell! When I first push some data into the vector, everything goes fine. Later on in the code, when I try to push some more data into the vector, my app just segfaults! The vector is declared as vector<Group> Groups and is a global variable in the file where I am using it. No externs anywhere else, etc... I can trace the error to: _M_deallocate(this->_M_impl._M_start, this->_M_impl._M_end_of_storage- this->_M_impl._M_start); in vector.tcc, when I finish adding/copying the last element to the vector. As far as I can tell, I shouldn't need anything to do with a copy constructor, as a shallow copy should be enough for this. I'm not even allocating any space (though I did add a reserve for the string to try it out). I have no idea what the problem is! I'm running this code on OpenSuse 10.2 with gcc 4.1.2. I'm not really too eager to upgrade gcc because of backward compatibility issues... This code worked "perfectly" on my Windows machine; I compiled it with gcc 3.4.5 (MinGW) without any problems... help! --- ... --- :::EDIT::: This is how I push data: Group tmp_grp; (...) tmp_grp.name = "Nova "; tmp_grp.codigo=GetGroupnextcode(); tmp_grp.deleted=0; tmp_grp.printers=0; tmp_grp.subpage=0; Groups.push_back(tmp_grp);

    Read the article

  • deleted gen folder, eclipse isn't generating it now :(

    - by LuxuryMode
    I accidentally deleted my gen folder and now, predictably, my resources are all messed up. I just created a gen folder myself and tried to project clean - that didn't work. Tried right-clicking project and going to android tools fix project properties - didn't work. Tried unchecking build automatically...didn't work. cleaned, closed project, closed eclipse, restarted, etc, etc. Nothing is working and I keep seeing this error: gen already exists but is not a source folder. Convert to a source folder or rename it. EDIT - OK was able to generate R.java, but now I'm getting crazy stuff in the console: [2011-06-14 17:06:11 - fastapp] Conversion to Dalvik format failed with error 1 [2011-06-14 17:06:42 - fastapp] Dx trouble processing "java/awt/font/NumericShaper.class": Ill-advised or mistaken usage of a core class (java.* or javax.*) when not building a core library. This is often due to inadvertently including a core library file in your application's project, when using an IDE (such as Eclipse). If you are sure you're not intentionally defining a core class, then this is the most likely explanation of what's going on. However, you might actually be trying to define a class in a core namespace, the source of which you may have taken, for example, from a non-Android virtual machine project. This will most assuredly not work. At a minimum, it jeopardizes the compatibility of your app with future versions of the platform. It is also often of questionable legality. If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine distribution, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message. If you go ahead and use "--core-library" but are in fact building an application, then be forewarned that your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem. If you are legitimately using some code that happens to be in a core package, then the easiest safe alternative you have is to repackage that code. That is, move the classes in question into your own package namespace. This means that they will never be in conflict with core system classes. JarJar is a tool that may help you in this endeavor. If you find that you cannot do this, then that is an indication that the path you are on will ultimately lead to pain, suffering, grief, and lamentation. [2011-06-14 17:06:42 - fastapp] Dx 1 error; aborting [2011-06-14 17:06:42 - fastapp] Conversion to Dalvik format failed with error 1 And eclipse can't resolve the import of my resources import com.me.fastapp.R;

    Read the article

  • How to do the processing and keep GUI refreshed using databinding?

    - by macias
    History of the problem This is continuation of my previous question How to start a thread to keep GUI refreshed? but since Jon shed new light on the problem, I would have to completely rewrite original question, which would make that topic unreadable. So, new, very specific question. The problem Two pieces: CPU hungry heavy-weight processing as a library (back-end) WPF GUI with databinding which serves as monitor for the processing (front-end) Current situation -- library sends so many notifications about data changes that despite it works within its own thread it completely jams WPF data binding mechanism, and in result not only monitoring the data does not work (it is not refreshed) but entire GUI is frozen while processing the data. The aim -- well-designed, polished way to keep GUI up to date -- I am not saying it should display the data immediately (it can skip some changes even), but it cannot freeze while doing computation. Example This is simplified example, but it shows the problem. XAML part: <StackPanel Orientation="Vertical"> <Button Click="Button_Click">Start</Button> <TextBlock Text="{Binding Path=Counter}"/> </StackPanel> C# part (please NOTE this is one piece code, but there are two sections of it): public partial class MainWindow : Window,INotifyPropertyChanged { // GUI part public MainWindow() { InitializeComponent(); DataContext = this; } private void Button_Click(object sender, RoutedEventArgs e) { var thread = new Thread(doProcessing); thread.IsBackground = true; thread.Start(); } // this is non-GUI part -- do not mess with GUI here public event PropertyChangedEventHandler PropertyChanged; public void OnPropertyChanged(string property_name) { if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(property_name)); } long counter; public long Counter { get { return counter; } set { if (counter != value) { counter = value; OnPropertyChanged("Counter"); } } } void doProcessing() { var tmp = 10000.0; for (Counter = 0; Counter < 10000000; ++Counter) { if (Counter % 2 == 0) tmp = Math.Sqrt(tmp); else tmp = Math.Pow(tmp, 2.0); } } } Known workarounds (Please do not repost them as answers) Those two first are based on Jon ideas: pass GUI dispatcher to library and use it for sending notifications -- why it is ugly? because it could be no GUI at all give up with data binding COMPLETELY (one widget with databinding is enough for jamming), and instead check from time to time data and update the GUI manually -- well, I didn't learn WPF just to give up with it now ;-) and this is mine, it is ugly, but simplicity of it kills -- before sending notification freeze a thread -- Thread.Sleep(1) -- to let the potential receiver "breathe" -- it works, it is minimalistic, it is ugly though, and it ALWAYS slows down computation even if no GUI is there So... I am all ears for real solutions, not some tricks.
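
    Not one of the three workarounds above, and only a sketch built on the question's own simplified example: let the worker keep counting at full speed, but raise PropertyChanged at most every 100 ms or so, so the binding engine sees a trickle of notifications instead of millions. The names mirror the example class; the 100 ms interval is an arbitrary choice. The trade-off, in the question's own terms, is that some intermediate values are simply never shown.

      long counter;
      DateTime lastNotify = DateTime.MinValue;

      public long Counter
      {
          get { return counter; }
          set
          {
              if (counter == value)
                  return;
              counter = value;

              // Throttle: notify the binding system at most every 100 ms.
              DateTime now = DateTime.UtcNow;
              if ((now - lastNotify).TotalMilliseconds >= 100)
              {
                  lastNotify = now;
                  OnPropertyChanged("Counter");
              }
          }
      }

      void doProcessing()
      {
          var tmp = 10000.0;
          for (long i = 0; i < 10000000; ++i)
          {
              Counter = i;
              tmp = (i % 2 == 0) ? Math.Sqrt(tmp) : Math.Pow(tmp, 2.0);
          }
          OnPropertyChanged("Counter");   // publish the final value unconditionally
      }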

    Read the article

  • NSOperation inside NSOperationQueue not being executed

    - by Martin Garcia
    I really need help here. I'm desperate at this point. I have NSOperation that when added to the NSOperationQueue is not being triggered. I added some logging to see the NSOperation status and this is the result: Queue operations count = 1 Queue isSuspended = 0 Operation isCancelled? = 0 Operation isConcurrent? = 0 Operation isFinished? = 0 Operation isExecuted? = 0 Operation isReady? = 1 Operation dependencies? = 0 The code is very simple. Nothing special. LoadingConflictEvents_iPad *loadingEvents = [[LoadingConflictEvents_iPad alloc] initWithNibName:@"LoadingConflictEvents_iPad" bundle:[NSBundle mainBundle]]; loadingEvents.modalPresentationStyle = UIModalPresentationFormSheet; loadingEvents.conflictOpDelegate = self; [self presentModalViewController:loadingEvents animated:NO]; [loadingEvents release]; ConflictEventOperation *operation = [[ConflictEventOperation alloc] initWithParameters:wiLr.formNumber pWI_ID:wiLr.wi_id]; [queue addOperation:operation]; NSLog(@"Queue operations count = %d",[queue operationCount]); NSLog(@"Queue isSuspended = %d",[queue isSuspended]); NSLog(@"Operation isCancelled? = %d",[operation isCancelled]); NSLog(@"Operation isConcurrent? = %d",[operation isConcurrent]); NSLog(@"Operation isFinished? = %d",[operation isFinished]); NSLog(@"Operation isExecuted? = %d",[operation isExecuting]); NSLog(@"Operation isReady? = %d",[operation isReady]); NSLog(@"Operation dependencies? = %d",[[operation dependencies] count]); [operation release]; Now my operation do many things on the main method, but the problem is never being called. The main is never executed. The most weird thing (believe me, I'm not crazy .. yet). If I put a break point in any NSLog line or in the creation of the operation the main method will be called and everything will work perfectly. This have been working fine for a long time. I have been making some changes recently and apparently something screw things up. One of those changes was to upgrade the device to iOS 5.1 SDK (iPad). To add something, I have the iPhone (iOS 5.1) version of this application that use the same NSOperation object. The difference is in the UI only, and everything works fine. Any help will be really appreciated. Regards,

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have strange a memory corruption problem. After many hours debugging and trying I think I found something. For example: I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So, the _ gets replaced by an 0 byte. I have seen this happening once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (in case of 9 till 32 bytes to move): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and 2 memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... could not reproduce however... This morning I read something about the 8087CW mode, and yes, if this is changed into $27F I get memory corruption! Normally it is $133F: The difference between $133F and $027F is that $027F sets up the FPU for doing less precise calculations (limiting to Double in stead of Extended) and different infiniti handling (which was used for older FPU’s, but is not used any more). Okay, now I found why but not when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and dll's and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?

    Read the article

  • iPhone Debugger Message -- Weird

    - by Bill Shiff
    Hello, I have an iPhone app that I've been working on and have recently upgraded my version of XCode. Since the upgrade, I can build and debug in the iPhone Simulator just fine, but when I try to debug on an attached device I get the following messages: From Xcode4: GNU gdb 6.3.50-20050815 (Apple version gdb-1510) (Fri Oct 22 04:12:10 UTC 2010) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys001 sharedlibrary apply-load-rules all warning: Unable to read symbols from "dyld" (prefix __dyld_) (not yet mapped into memory). warning: Unable to read symbols for (null)/Library/Frameworks/MessageUI.framework/MessageUI (file not found). warning: Unable to read symbols from "MessageUI" (not yet mapped into memory). warning: Unable to read symbols for (null)/Library/Frameworks/MapKit.framework/MapKit (file not found). warning: Unable to read symbols from "MapKit" (not yet mapped into memory). warning: Unable to read symbols from "Foundation" (not yet mapped into memory). warning: Unable to read symbols for (null)/Library/Frameworks/UIKit.framework/UIKit (file not found). warning: Unable to read symbols from "UIKit" (not yet mapped into memory). warning: Unable to read symbols for (null)/Library/Frameworks/CoreGraphics.framework/CoreGraphics (file not found). warning: Unable to read symbols from "CoreGraphics" (not yet mapped into memory). warning: Unable to read symbols from "CoreData" (not yet mapped into memory). warning: Unable to read symbols from "QuartzCore" (not yet mapped into memory). warning: Unable to read symbols from "libgcc_s.1.dylib" (not yet mapped into memory). warning: Unable to read symbols from "libSystem.B.dylib" (not yet mapped into memory). warning: Unable to read symbols from "libobjc.A.dylib" (not yet mapped into memory). warning: Unable to read symbols from "CoreFoundation" (not yet mapped into memory). target remote-mobile /tmp/.XcodeGDBRemote-3836-28 Switching to remote-macosx protocol mem 0x1000 0x3fffffff cache mem 0x40000000 0xffffffff none mem 0x00000000 0x0fff none [Switching to thread 11523] [Switching to thread 11523] gdb stack crawl at point of internal error: 0 gdb-arm-apple-darwin 0x0013216e internal_vproblem + 316

    Read the article

  • libXcodeDebuggerSupport.dylib is missing in iOS 4.2.1 development SDK

    - by Kalle
    Note: creating a symbolic link to use the 4.2 lib seems to work fine -- maybe cd /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/ sudo ln -s ../../4.2 (8C134)/Symbols/Developer Request: See end of this question! After upgrading from 4.2.0 (beta, I believe) to 4.2.1, the libXcodeDebuggerSupport.dylib file is missing, which results in: warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib (file not found). which I guess isn't good. Looking at the directory in question I note: .../DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib but .../DeviceSupport/4.2.1 (8C148)/Symbols/System/ .../DeviceSupport/4.2.1 (8C148)/Symbols/usr/ the above two dirs make up all the content in the 4.2.1 folder. No "Developer" folder. Checking the /usr/ dir there, I find no libXcodeDebuggerSupport.dylib file in the lib dir either, so ln -s'ing isn't an option. Worth mentioning: after the upgrade, I plugged the iPad in and had to click "Use for development" in Xcode organizer. Doing so, I got a message about symbols missing for that version, and Xcode proceeded to generate such, then failed. I restored the iPad and did "Use for development" again, and nothing about missing symbols appeared... Update: deletion of /Developer and reinstallation of Xcode from scratch does not fix this issue. Update 2: I just realized that after the reinstall of Xcode, .../DeviceSupport/4.2 (8C134)/Symbols is now a symbolic link, lrwxr-xr-x 1 root admin 36 Dec 3 17:17 Symbols -> ../../Developer/SDKs/iPhoneOS4.2.sdk And the directory in question has the appropriate files. Maybe this is simply a matter of linking the 4.2.1 dir in the same fashion? I'll try that and see if Xcode freaks out. If someone who has this file could provide a md5 sum that would be splendid. This is what it says for me: $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2\ \(8C134\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib MD5 (/Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2 (8C134)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib) = 08f93a0a2e3b03feaae732691f112688 If the MD5 sum is identical to the output of $ md5 /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1\ \(8C148\)/Symbols/Developer/usr/lib/libXcodeDebuggerSupport.dylib then we're all set.

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • C# DLL Deployed in COM+. Error while accessing the methods.

    - by Dakshinamurthy
    I have the C# Dll (ABService) deployed in COM + and my os Is windows 2008. I have given the strong name for this dll and its dependent dll’s When I access the method of this dll through localhost or if I add the reference to the client project the method are executed successfully. Simply if I access the dll from the same machine with the reference it is working. So I think there is no problem with the way I deployed in the COM +. I have the doubt whether I have the problem in OS and Visual Studio 2008 combination. I have built all the dll with the Visual Studio 2008 with Target cpu as x86 and targert framework as 2.0. I have given in below the codes I have tried with and the errors. I need to create the object for the dll in server machine(64 bit) and access its method from the client(32 bit) Code : Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2],false); ABService.Service service1= (ABService.Service)Activator.CreateInstance(svr); strresult = service1.ExecuteService(orequest.xml); Error :{"Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine 10.105.138.64 failed due to the following error: 80040154."} Code Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2],true); object Service1 = null; Service1 = (ABService.Service)Activator.CreateInstance(svr, true); strresult = Convert.ToString(ReflectionHelper.Invoke(Service1, "ExecuteService", new object[] { orequest.xml })); Service1 = null; Error: Retrieving the COM class factory for remote component with CLSID {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine ftpsite failed due to the following error: 80040154. With the below code instead of C#.Net dll if i have the vb dll in Com + the method is executed successfully. Code ords = new RDS.DataSpace(); ords.InternetTimeout = 600000; object M_Service = null; ABService.Service oabservice = null; M_Service = ords.CreateObject("ABService.Service",url); strresult = Convert.ToString(ReflectionHelper.Invoke(M_Service, "ExecuteService", new object[] { orequest.xml })); Error : {"Object doesn't support this property or method 'ExecuteService'"} Code object obj=Interaction.CreateObject("ABService.Service", "10.105.138.64"); strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml })); Error: {"Cannot create ActiveX component."} Code object obj = Activator.GetObject(typeof(ABService.Service), @"http://10.105.138.64:80/ABANET"); strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml })); Error: InnerException {"The remote server returned an error: (405) Method Not Allowed."} System.Exception {System.Net.WebException} Message "Exception has been thrown by the target of an invocation.”
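
    For what it's worth, 0x80040154 is REGDB_E_CLASSNOTREG ("Class not registered"), which on a 64-bit host is frequently a 32/64-bit registration mismatch rather than a deployment mistake. Below is a small diagnostic sketch, not the original code (remoteHost and requestXml are placeholders), that separates "class not registered on the remote machine" from later call failures.

      using System;
      using System.Reflection;
      using System.Runtime.InteropServices;

      static class RemoteCall
      {
          // remoteHost and requestXml stand in for the values the question builds up.
          static string Execute(string remoteHost, string requestXml)
          {
              try
              {
                  Type svr = Type.GetTypeFromProgID("ABService.Service", remoteHost, true); // true = throw if not found
                  object service = Activator.CreateInstance(svr);
                  return (string)svr.InvokeMember("ExecuteService",
                      BindingFlags.InvokeMethod, null, service, new object[] { requestXml });
              }
              catch (COMException ex)
              {
                  // 0x80040154 = REGDB_E_CLASSNOTREG: the ProgID/CLSID is not registered
                  // for the bitness being requested on the remote machine.
                  if ((uint)ex.ErrorCode == 0x80040154)
                      throw new InvalidOperationException(
                          "ABService.Service is not registered (or not registered for this bitness) on " + remoteHost, ex);
                  throw;
              }
          }
      }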

    Read the article

  • What is fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert bool to byte? I want this mapping: False=0, True=1. Note: I don't want to use any if statement. Update: I don't want to use a conditional statement. I don't want the CPU to stall or to guess the next statement. I want to optimize this code: private static string ByteArrayToHex(byte[] barray) { char[] c = new char[barray.Length * 2]; byte k; for (int i = 0; i < barray.Length; ++i) { k = ((byte)(barray[i] >> 4)); c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30); k = ((byte)(barray[i] & 0xF)); c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30); } return new string(c); } Update: The length of the array is very large, on the order of terabytes! Therefore I need to optimize it if possible. I shouldn't need to explain myself; the question is still valid. Update: I'm working on a project and looking at other people's code. That's why I didn't provide the function in the first place. I didn't want to spend time explaining things to people when they have opinions about the code. I shouldn't need to provide the background of my work in my question, or a function that was not written by me. I have started to optimize it part by part. If I needed help with the whole function, I would have asked that in another question. That is why I asked this very simply at the beginning. Unfortunately, people couldn't keep themselves to the question. So please, if you want to help, answer the question. Update: For those who want to see the point of this question, this example shows how two if statements can be removed from the code: byte A = k > 9; // if this were possible, (k > 9) would be 0 or 1 c[i * 2] = (char)(A * (k + 0x37) - (A - 1) * (k + 0x30));
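
    A couple of branch-free fragments matching the mapping asked for (illustrative only, not from the post; the shift trick relies on >> being an arithmetic shift on int in C#):

      bool flag = true;
      byte b = Convert.ToByte(flag);          // False=0, True=1, no if in user code

      // Nibble (0..15) to hex digit without a conditional:
      // (9 - k) >> 31 is -1 (all ones) exactly when k > 9, and & 7 turns that
      // into the extra 7 needed to jump from '9' + 1 up to 'A'.
      int k = 0xC;
      char c = (char)(k + 0x30 + (((9 - k) >> 31) & 7));   // 'C'

      // Or skip the arithmetic entirely with a lookup table:
      const string hexDigits = "0123456789ABCDEF";
      char c2 = hexDigits[k];

    Applied to the loop above, the two ternaries disappear; whether that actually beats the branch predictor on real data is something only measurement will tell.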

    Read the article

  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.Net applications that have been written over the course of 8 years. Mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces but are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav we have to modify it in all the apps which is a pain. Also, the various versions of Crystal reports and that we used tables to organize the visual elements and we end up with a mess, especially with all the multi-platform .Net versions running. We need to streamline the suite of apps and make it easier to add on new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the style of the application suite "integrated" into the UI and a link to a MediaWiki Wiki application as well. My current thinking is to use an open source content management system (CMS) like Joomla (PHP based unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the Wiki content into articles which could be published without interfering with the .Net apps. Then essentially use an IFrame within an "article" to "host" the .Net application, then... Upgrade the .Net apps to VS2010, strip out all the common header/footer controls and migrate the styles to use the style sheets used in the CMS. As I write this, I certainly realize this is a lot of work and there are optimization issues which this may cause as well as using IFrames seems a bit like cheating and I've read about issues with IFrames. I know that we could use .Net application styling, but it seems like a lot more work (not sure really). Also, the use of a CMS to handle the blog and wiki also seems appealing, unless there is a .Net CMS out there that can handle all of these requirements. Given this information, I am looking to know if I am totally going in the wrong direction? We tried to use open source and integrate it over time, but not this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right and should we just focus on getting the .Net streamlined? I understand that no matter what we do, it's going to be a lot of work. The communities considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi Everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question that I have about calculating on the fly vs storing fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following 2 examples. Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100km. You also want to know how many KMs it can travel, which can be calculated from the tank size and economy. I see 2 ways of doing this: When a car is added or updated, calculate the amount of KMs and store this as a static field in the database. Every time a car is accessed, calculate the amount of KMs on the fly. Because the cars economy/tank size doesn't change (although it could be edited), the KMs is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste cpu time as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated? My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have a app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better. To perform a MySQL query to count all the items in each category every single time the page is accessed? Or store the count in a field in the categories table and update when an item is added / deleted? I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow as opposed to storing the data in a field. If it's not then please let me know, I just want to learn about when to use either method. On a small scale I guess it wouldn't matter either way, but apps like Facebook, would they really count the amount of friends you have every time someone views your profile or would they just store it as a field? I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs storing. Thanks in advance, Christian

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First a little background ... In what follows, I use C,C++ and Java for coding (general) algorithms, not gui's and fancy program's with interfaces, but simple command line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot as they tend to reduce complexity of book keeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java and it felt cumbersome. I did not know C++ enough to work in it without having to look up every single function and so I quickly reverted back to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrated too much attention on a much too high level language and I needed more experience with how a CPU interacts with memory and whats really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obviously reasons, I could not use assembly language to code on a daily basis, it was mostly reserved for fun diversions. After learning more about the computer through this experience I then realized that C++ is so much closer to the "level of 1's and 0's" than Java was, but I still felt it to be incredibly obtuse, like a swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagent" to not abstract what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the stl vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks like almost entirely C with containers from C++ thrown in, the only feature I use from C++. I'd like to know if its consider okay in practice to use just one feature of C++, and ignore the rest in favor of C type code?

    Read the article

  • What common routines do you put in your Program.cs for C#

    - by Rick
    I'm interested in any common routine/procedures/methods that you might use in you Program.cs when creating a .NET project. For instance I commonly use the following code in my desktop applications to allow easy upgrades, single instance execution and friendly and simple reporting of uncaught system application errors. using System; using System.Diagnostics; using System.Threading; using System.Windows.Forms; namespace NameoftheAssembly { internal static class Program { /// <summary> /// The main entry point for the application. Modified to check for another running instance on the same computer and to catch and report any errors not explicitly checked for. /// </summary> [STAThread] private static void Main() { //for upgrading and installing newer versions string[] arguments = Environment.GetCommandLineArgs(); if (arguments.GetUpperBound(0) > 0) { foreach (string argument in arguments) { if (argument.Split('=')[0].ToLower().Equals("/u")) { string guid = argument.Split('=')[1]; string path = Environment.GetFolderPath(Environment.SpecialFolder.System); var si = new ProcessStartInfo(path + "\\msiexec.exe", "/x" + guid); Process.Start(si); Application.Exit(); } } //end of upgrade } else { bool onlyInstance = false; var mutex = new Mutex(true, Application.ProductName, out onlyInstance); if (!onlyInstance) { MessageBox.Show("Another copy of this running"); return; } AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException; Application.ThreadException += ApplicationThreadException; Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new Form1()); } } private static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e) { try { var ex = (Exception) e.ExceptionObject; MessageBox.Show("Whoops! Please contact the developers with the following" + " information:\n\n" + ex.Message + ex.StackTrace, " Fatal Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); } catch (Exception) { //do nothing - Another Exception! Wow not a good thing. } finally { Application.Exit(); } } public static void ApplicationThreadException(object sender, ThreadExceptionEventArgs e) { try { MessageBox.Show("Whoops! Please contact the developers with the following" + " information:\n\n" + e.Exception.Message + e.Exception.StackTrace, " Error", MessageBoxButtons.OK, MessageBoxIcon.Stop); } catch (Exception) { //do nothing - Another Exception! Wow not a good thing. } } } } I find these routines to be very helpful. What methods have you found helpful in Program.cs?
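
    One detail about the single-instance check above that bites occasionally: mutex is only a local, so in a release build the GC may collect (and thereby release) it while the app is still running, which can quietly re-enable second instances. A hedged variant of the same block that keeps the handle rooted for the lifetime of Application.Run:

      bool onlyInstance;
      using (var mutex = new Mutex(true, Application.ProductName, out onlyInstance))
      {
          if (!onlyInstance)
          {
              MessageBox.Show("Another copy of this running");
              return;
          }

          AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
          Application.ThreadException += ApplicationThreadException;
          Application.EnableVisualStyles();
          Application.SetCompatibleTextRenderingDefault(false);
          Application.Run(new Form1());
      }   // the using block keeps the mutex referenced until Run returns, then closes it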

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (cause the guys thought it is a lot simpler to make the UI in it). However, as the image processing part needs a lot of CPU juice I am making this part in C++. The idea is to link it to the C# project and just call a function from a DLL to make the image processing part and allow to the C# environment to process the data afterwards. Now the only problem is that it seems I am not able to make the DLL. Simply put the compiler refuses to put any function into the DLL that I compile. Because the project requires some development time testing I have created two projects into a C++ solution. One is for the Dll and another console application. The console project holds all the files and I just include the corresponding header into my DLL project file. I thought the compiler should take out the functions that I marked as to be exported and make the DLL from them. Nevertheless this does not happens. Here it is how I defined the function in the header: extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf, int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag, ObjectInformation* robot1, ObjectInformation* robot2, ObjectInformation* robot3, ObjectInformation* robot4, ObjectInformation* puck); extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput, CvRect &imgROI, CvScalar &refHSVColorLow, CvScalar &refHSVColorHi ); Followed by the implementation in the cpp file: extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput, CvRect &imgROI,&refHSVColorLow, CvScalar &refHSVColorHi ) { \\... return cvPoint((int)( M10/M00) + imgROI.x, (int)( M01/M00 ) + imgROI.y) ;} extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf, int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag, ObjectInformation* robot1, ObjectInformation* robot2, ObjectInformation* robot3, ObjectInformation* robot4, ObjectInformation* puck) { \\ ...}; And my main file for the DLL project looks like: #ifdef _MANAGED #pragma managed(push, off) #endif /// <summary> Include files. </summary> #include "..\ImageProcessingDebug\ImageProcessingTest.h" #include "..\ImageProcessingDebug\ImageProcessing.h" BOOL APIENTRY DllMain( HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved) { return TRUE; } #ifdef _MANAGED #pragma managed(pop) #endif Needless to say it does not work. A quick look with DLL export viewer 1.36 reveals that no function is inside the library. I don't get it. What I am doing wrong ? As side not I am using the C++ objects (and here it is the C++ DLL part) such as the vector. However, only for internal usage. These will not appear in the headers of either function as you can observe from the previous code snippets. Any ideas? Thx, Bernat

    Read the article

  • How to interpret kernel panics?

    - by Owen
    Hi all, I'm new to linux kernel and could barely understand how to debug kernel panics. I have this error below and I don't know where in the C code should I start checking. I was thinking maybe I could echo what functions are being called so I could check where in the code is this null pointer dereferenced. What print function should I use ? How do you interpret the error message below? Unable to handle kernel NULL pointer dereference at virtual address 0000000d pgd = c7bdc000 [0000000d] *pgd=4785f031, *pte=00000000, *ppte=00000000 Internal error: Oops: 17 [#1] PREEMPT Modules linked in: bcm5892_secdom_fw(P) bcm5892_lcd snd_bcm5892 msr bcm5892_sci bcm589x_ohci_p12 bcm5892_skeypad hx_decoder(P) pinnacle hx_memalloc(P) bcm_udc_dwc scsi_mod g_serial sd_mod usb_storage CPU: 0 Tainted: P (2.6.27.39-WR3.0.2ax_standard #1) PC is at __kmalloc+0x70/0xdc LR is at __kmalloc+0x48/0xdc pc : [c0098cc8] lr : [c0098ca0] psr: 20000093 sp : c7a9fd50 ip : c03a4378 fp : c7a9fd7c r10: bf0708b4 r9 : c7a9e000 r8 : 00000040 r7 : bf06d03c r6 : 00000020 r5 : a0000093 r4 : 0000000d r3 : 00000000 r2 : 00000094 r1 : 00000020 r0 : c03a4378 Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user Control: 00c5387d Table: 47bdc008 DAC: 00000015 Process sh (pid: 1088, stack limit = 0xc7a9e260) Stack: (0xc7a9fd50 to 0xc7aa0000) fd40: c7a6a1d0 00000020 c7a9fd7c c7ba8fc0 fd60: 00000040 c7a6a1d0 00000020 c71598c0 c7a9fd9c c7a9fd80 bf06d03c c0098c64 fd80: c71598c0 00000003 c7a6a1d0 bf06c83c c7a9fdbc c7a9fda0 bf06d098 bf06d008 fda0: c7159880 00000000 c7a6a2d8 c7159898 c7a9fde4 c7a9fdc0 bf06d130 bf06d078 fdc0: c79ca000 c7159880 00000000 00000000 c7afbc00 c7a9e000 c7a9fe0c c7a9fde8 fde0: bf06d4b4 bf06d0f0 00000000 c79fd280 00000000 0f700000 c7a9e000 00000241 fe00: c7a9fe3c c7a9fe10 c01c37b4 bf06d300 00000000 c7afbc00 00000000 00000000 fe20: c79cba84 c7463c78 c79fd280 c7473b00 c7a9fe6c c7a9fe40 c00a184c c01c35e4 fe40: 00000000 c7bb0005 c7a9fe64 c79fd280 c7463c78 00000000 c00a1640 c785e380 fe60: c7a9fe94 c7a9fe70 c009c438 c00a164c c79fd280 c7a9fed8 c7a9fed8 00000003 fe80: 00000242 00000000 c7a9feb4 c7a9fe98 c009c614 c009c2a4 00000000 c7a9fed8 fea0: c7a9fed8 00000000 c7a9ff64 c7a9feb8 c00aa6bc c009c5e8 00000242 000001b6 fec0: 000001b6 00000241 00000022 00000000 00000000 c7a9fee0 c785e380 c7473b00 fee0: d8666b0d 00000006 c7bb0005 00000300 00000000 00000000 00000001 40002000 ff00: c7a9ff70 c79b10a0 c79b10a0 00005402 00000003 c78d69c0 ffffff9c 00000242 ff20: 000001b6 c79fd280 c7a9ff64 c7a9ff38 c785e380 c7473b00 00000000 00000241 ff40: 000001b6 ffffff9c 00000003 c7bb0000 c7a9e000 00000000 c7a9ff94 c7a9ff68 ff60: c009c128 c00aa380 4d18b5f0 08000000 00000000 00071214 0007128c 00071214 ff80: 00000005 c0027ee4 c7a9ffa4 c7a9ff98 c009c274 c009c0d8 00000000 c7a9ffa8 ffa0: c0027d40 c009c25c 00071214 0007128c 0007128c 00000241 000001b6 00000000 ffc0: 00071214 0007128c 00071214 00000005 00073580 00000003 000713e0 400010d0 ffe0: 00000001 bef0c7b8 000269cc 4d214fec 60000010 0007128c 00000000 00000000 Backtrace: [] (__kmalloc+0x0/0xdc) from [] (gs_alloc_req+0x40/0x70 [g_serial]) r8:c71598c0 r7:00000020 r6:c7a6a1d0 r5:00000040 r4:c7ba8fc0 [] (gs_alloc_req+0x0/0x70 [g_serial]) from [] (gs_alloc_requests+0x2c/0x78 [g_serial]) r7:bf06c83c r6:c7a6a1d0 r5:00000003 r4:c71598c0 [] (gs_alloc_requests+0x0/0x78 [g_serial]) from [] (gs_start_io+0x4c/0xac [g_serial]) r7:c7159898 r6:c7a6a2d8 r5:00000000 r4:c7159880 [] (gs_start_io+0x0/0xac [g_serial]) from [] (gs_open+0x1c0/0x224 [g_serial]) r9:c7a9e000 r8:c7afbc00 r7:00000000 r6:00000000 r5:c7159880 
r4:c79ca000 [] (gs_open+0x0/0x224 [g_serial]) from [] (tty_open+0x1dc/0x314) [] (tty_open+0x0/0x314) from [] (chrdev_open+0x20c/0x22c) [] (chrdev_open+0x0/0x22c) from [] (__dentry_open+0x1a0/0x2b8) r8:c785e380 r7:c00a1640 r6:00000000 r5:c7463c78 r4:c79fd280 [] (__dentry_open+0x0/0x2b8) from [] (nameidata_to_filp+0x38/0x50) [] (nameidata_to_filp+0x0/0x50) from [] (do_filp_open+0x348/0x6f4) r4:00000000 [] (do_filp_open+0x0/0x6f4) from [] (do_sys_open+0x5c/0x170) [] (do_sys_open+0x0/0x170) from [] (sys_open+0x24/0x28) r8:c0027ee4 r7:00000005 r6:00071214 r5:0007128c r4:00071214 [] (sys_open+0x0/0x28) from [] (ret_fast_syscall+0x0/0x2c) Code: e59c4080 e59c8090 e3540000 159c308c (17943103) ---[ end trace be196e7cee3cb1c9 ]--- note: sh[1088] exited with preempt_count 2 process '-/bin/sh' (pid 1088) exited. Scheduling for restart. Welcome to Wind River Linux

    Read the article

  • Remote XP -> Win98 WMI Connection

    - by Logan Young
    I've asked this on Technet, but because Win98 is no longer supported, I can't get any decent information, I was hoping there might be some "old school" developers here who might be able to help me. There is an application that we use a lot at work. This application should run 8am-5pm with as little interruption as possible. Most of the computers where this application runs are using Win98, and we have no way to upgrade them because we can't buy new hardware at the moment. My computer is running WinXP, so I thought of a way to make sure that this application runs all the time: The idea I had was to develop a Windows Service that executes a VBScript file that contains a WMI query to get a list of processes from each computer. Each list is then examined, and, depending on whether or not the target application is running, it will either do nothing, or it will execute another VBScript file that contains a WMI query that will be used to start the target application remotely. I later found a way to do this all with 1 VBScript file (see code below) My problem is in the remote connection to the target computers. I've installed WMI Core 1.5 on them, but every time I try the remote connection, I get the following: The remote server is unavailable or does not exist: 'GetObject' VBScript runtime error 800A01CE I've done some research, and all I've found is info about DCOM Config and Windows Firewall, but Win98 doesn't have either of these. ' #### Variables and constants #### Const HIDDEN_WINDOW = 12 Dim T ' #### End Variables and constants #### Main() Sub Main() ' #### Get Process information from WMI Computer = "." Set WMI = GetObject("winmgmts:" & _ "{ImpersonationLevel=Impersonate}!\\" & Computer & "\root\cimv2") Set Settings = WMI.ExecQuery("SELECT * FROM Win32_Process") For Each Process In Settings ' #### If the application is found to be running, set a value to indicate this If Process.Name = "NOTEPAD.EXE" Then T = True End If Next ' #### T will only have a value if the application is not running. We therefore ' #### evaluate it to determine if it has a value or not. If not, start the application If Not T Then 'MsgBox("Application not found.") Set Startup = WMI.Get("Win32_ProcessStartup") Set Config = Startup.SpawnInstance_ Config.ShowWindow = HIDDEN_WINDOW Set Process = GetObject("winmgmts:root\cimv2:Win32_Process") errReturn = Process.Create(_ "C:\Windows\notepad.exe", null, Config, intProcessID) End If End Sub This uses WMI to get the list of processes from the local computer and, if the target application is running, it'll do nothing, otherwise it'll forcefully start the target application. The problem is that this works only if I specify the local comuter, if I target another computer, I get the error mentioned above. Does anyone have any ideas? Thanks in advance for the help!
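
    If the watchdog service ends up being written in .NET anyway, the same process query can be issued without shelling out to a VBScript file, via System.Management. This is only a sketch of the query itself (host and process names are placeholders), and it does nothing to answer whether WMI Core 1.5 on the Win98 boxes will accept the remote connection, which is the real open question.

      using System;
      using System.Management;   // add a reference to System.Management.dll

      static class ProcessCheck
      {
          static bool IsRunning(string host, string processName)
          {
              var options = new ConnectionOptions();    // fill in Username/Password if required
              var scope = new ManagementScope(@"\\" + host + @"\root\cimv2", options);
              scope.Connect();                          // remote-connection failures surface here

              var query = new ObjectQuery("SELECT Name FROM Win32_Process");
              using (var searcher = new ManagementObjectSearcher(scope, query))
              {
                  foreach (ManagementObject process in searcher.Get())
                  {
                      if (string.Equals((string)process["Name"], processName,
                                        StringComparison.OrdinalIgnoreCase))
                          return true;
                  }
              }
              return false;
          }
      }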

    Read the article

  • Search row with highest number of cells in a row with colspan

    - by user593029
    What is an efficient way to find the row with the highest number of cells in a big table that has numerous colspans (merged cells; the colspan itself should be ignored, so in the example below the highest number of cells is 4, in the first row)? Is it js/jquery with a regular expression, or just a loop with bubble sorting? I found the link below explaining the use of a regex; is that the ideal way ... can someone suggest pseudocode for this? High cpu consumption due to a jquery regex patch

        <table width="156" height="84" border="0" >
          <tbody>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px" colspan="2"/>
            </tr>
            <tr style="height:10px">
              <td width="10" style="width:10px; height:10px" colspan="2"/>
              <td width="10" style="width:10px; height:10px"/>
              <td width="10" style="width:10px; height:10px"/>
            </tr>
          </tbody>
        </table>
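    Neither a regex over the markup nor any sorting is needed for this: a single pass over the rows that counts each td (or th) once, ignoring its colspan attribute, already yields the maximum. Below is a minimal sketch of that counting logic, written in Java with the jsoup library purely to illustrate the loop; in the browser the same single pass works with jQuery, e.g. iterating $('tr') and taking children('td').length for each row. The class name and the inline sample markup are made up for the example.

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.nodes.Element;

        public class WidestRow {
            public static void main(String[] args) {
                // Stand-in for the real table markup from the question.
                String html = "<table><tr><td/><td/><td/><td/></tr>"
                            + "<tr><td colspan=\"2\"/><td/><td/></tr></table>";
                Document doc = Jsoup.parse(html);

                int max = 0;
                for (Element row : doc.select("tr")) {
                    int cells = 0;
                    for (Element child : row.children()) {
                        String tag = child.tagName();
                        if (tag.equals("td") || tag.equals("th")) {
                            cells++; // one per cell; colspan is deliberately not counted
                        }
                    }
                    max = Math.max(max, cells);
                }
                System.out.println("Widest row has " + max + " cells"); // prints 4 here
            }
        }

    This is O(rows x cells) with no regular expressions and no sorting, so it should stay cheap even for a big table.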

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format

    Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system

        |----------------+-------------------------------|
        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |
        |----------------+-------------------------------|

    There are some other services running on the server using negligible processor time.

    File statistics

        |------------------+--------------|
        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |
        |------------------+--------------|

    The total number of datapoints is a very rough estimate.

    Proposed schema

    I'm planning on doing things "right" (i.e. normalizing the data like crazy) and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 Billion datapoint question

    I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info

    The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |
        |-------------+-------------|

    spectra table

        | column name    | type        |
        |----------------+-------------|
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |
        |----------------+-------------|

    datapoints table

        | column name | type        |
        |-------------+-------------|
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |
        |-------------+-------------|

    Is this reasonable?
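    A fully normalized datapoints table means roughly 200 billion rows, which is the part of the plan most likely to strain MySQL. One alternative that is sometimes weighed against that layout, since the arrays are write-once and are read as whole spectra, is to keep runs and spectra as proper tables but pack each spectrum's datapoint arrays into a single BLOB column. The sketch below is written in Java only to illustrate the packing; the BLOB-per-spectrum layout and the names used are assumptions for illustration, not part of the schema proposed above.

        import java.nio.ByteBuffer;

        // Minimal sketch: pack a spectrum's (mz, num_counts) arrays into one byte[]
        // suitable for a BLOB column, and unpack it again. Requires Java 8+ for Double.BYTES.
        public class SpectrumBlob {

            static byte[] pack(double[] mz, double[] counts) {
                ByteBuffer buf = ByteBuffer.allocate(Double.BYTES * (mz.length + counts.length));
                for (double v : mz) buf.putDouble(v);
                for (double v : counts) buf.putDouble(v);
                return buf.array();
            }

            static double[][] unpack(byte[] blob) {
                ByteBuffer buf = ByteBuffer.wrap(blob);
                int n = blob.length / (2 * Double.BYTES);
                double[] mz = new double[n];
                double[] counts = new double[n];
                for (int i = 0; i < n; i++) mz[i] = buf.getDouble();
                for (int i = 0; i < n; i++) counts[i] = buf.getDouble();
                return new double[][] { mz, counts };
            }

            public static void main(String[] args) {
                double[][] roundTrip = unpack(pack(new double[] { 123.456, 234.567 },
                                                   new double[] { 10.0, 12.5 }));
                System.out.println(roundTrip[0][1]); // 234.567
            }
        }

    The trade-off is that filtering on individual mz or num_counts values moves out of SQL and into application code, so whether this helps depends on whether the analysis reads whole spectra or needs per-datapoint WHERE clauses.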

    Read the article

  • Swing: does DefaultBoundedRangeModel coalesce multiple events?

    - by Jason S
    I have a JProgressBar displaying a BoundedRangeModel which is extremely fine-grained, and I was concerned that updating it too often would slow down my computer. So I wrote a quick test program (see below) which has a 10Hz timer, but each timer tick makes 10,000 calls to microtick(), which in turn increments the BoundedRangeModel. Yet it seems to play nicely with the JProgressBar; my CPU is not working hard to run the program. How does JProgressBar or DefaultBoundedRangeModel do this? They seem to be smart about how much work they do to update the JProgressBar, so that as a user I don't have to worry about updating the BoundedRangeModel's value.

        package com.example.test.gui;

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;

        import javax.swing.BoundedRangeModel;
        import javax.swing.DefaultBoundedRangeModel;
        import javax.swing.JFrame;
        import javax.swing.JPanel;
        import javax.swing.JProgressBar;
        import javax.swing.Timer;

        public class BoundedRangeModelTest1 extends JFrame {
            final private BoundedRangeModel brm = new DefaultBoundedRangeModel();
            final private Timer timer = new Timer(100, new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent arg0) {
                    tick();
                }
            });

            public BoundedRangeModelTest1(String title) {
                super(title);
                JPanel p = new JPanel();
                p.add(new JProgressBar(this.brm));
                getContentPane().add(p);
                this.brm.setMaximum(1000000);
                this.brm.setMinimum(0);
                this.brm.setValue(0);
            }

            protected void tick() {
                for (int i = 0; i < 10000; ++i) {
                    microtick();
                }
            }

            private void microtick() {
                this.brm.setValue(this.brm.getValue() + 1);
            }

            public void start() {
                this.timer.start();
            }

            static public void main(String[] args) {
                BoundedRangeModelTest1 f = new BoundedRangeModelTest1("BoundedRangeModelTest1");
                f.pack();
                f.setVisible(true);
                f.setDefaultCloseOperation(EXIT_ON_CLOSE);
                f.start();
            }
        }
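    A small way one might check where the coalescing actually happens is to count model change events against actual paints, using the test program above. DefaultBoundedRangeModel fires a ChangeEvent for every setValue call that changes the value, so it does not coalesce events; what stays cheap is the painting, because the progress bar's repaint() only marks the component dirty and RepaintManager merges the pending paint requests before the EDT paints. The counters, the printing timer, and the Java 8 lambdas below are additions for illustration, not part of the original program; the snippet is meant to replace the plain JProgressBar line in the constructor.

        // Sketch: instrument the test program's constructor (assumes the fields and
        // the JPanel p from BoundedRangeModelTest1 above; counter names are made up).
        final java.util.concurrent.atomic.AtomicLong changeEvents = new java.util.concurrent.atomic.AtomicLong();
        final java.util.concurrent.atomic.AtomicLong paints = new java.util.concurrent.atomic.AtomicLong();

        this.brm.addChangeListener(e -> changeEvents.incrementAndGet());

        JProgressBar bar = new JProgressBar(this.brm) {
            @Override
            protected void paintComponent(java.awt.Graphics g) {
                paints.incrementAndGet();
                super.paintComponent(g);
            }
        };
        p.add(bar);

        // Report once a second: with the 10 Hz timer doing 10,000 setValue calls per
        // tick, the change-event count grows by about 100,000 per second while the
        // paint count stays small, since no painting can even run until each tick()
        // loop returns control to the EDT.
        new Timer(1000, e -> System.out.println(
                changeEvents.get() + " change events, " + paints.get() + " paints")).start();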

    Read the article

< Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >