Search Results

Search found 7277 results on 292 pages for 'operating room analytics'.


  • How can I handle unread push notifications in iOS?

    - by Bartserk
    I have an iOS 5.1 application that registers with the APNS service to receive push notifications. Registration succeeds and the notifications arrive correctly. The problem comes when I try to handle them. While the application is running in the foreground, the didReceiveRemoteNotification method in the AppDelegate is called and the notification is handled as intended. When the application is in the background, or not running at all, that method is not called.

    I've read that you should add some code to didFinishLaunchingWithOptions to obtain the notification from the userInfo dictionary and handle it there. That works just fine, but ONLY when the application is opened by tapping the notification in Notification Center. This means that if you open the application by tapping its badge, or simply by switching back to it while it was in the background, the app never realises that a notification came in. Additionally, if more than one notification was received, tapping an entry in Notification Center only lets us handle that one notification at a time, which is a pain :-) Is there any way to read the pending notifications still sitting in Notification Center? I know there is a way to flush them using the cancelAllLocalNotifications method, but I haven't found a way to just read them, and I really need to handle all of them. I thought of implementing a communication protocol with the third-party notification server to retrieve the information again when the application comes to the foreground, but since the information is already in the operating system I would find it strange if it were impossible to access it somehow. So, does anybody know a way to do it? Thanks in advance.

    Read the article

  • Why does this tooltip appear *below* a translucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity of less than 1.0. I have a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over it. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form; however, my form is obviously no longer translucent. ;-) I'm testing on an XP system with .NET 3.5. If you don't see this problem on your system, let me know what operating system and version of .NET you have. I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32. Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should? Here is a minimal program to demonstrate the problem:

        using System;
        using System.Windows.Forms;

        public class Form1 : Form
        {
            private ToolTip toolTip1;
            private Label label1;

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            public Form1()
            {
                toolTip1 = new ToolTip();
                label1 = new Label();
                label1.Location = new System.Drawing.Point(105, 127);
                label1.Text = "Hover over me";
                label1.AutoSize = true;
                toolTip1.SetToolTip(label1, "This is a moderately long string, " +
                    "designed to be very long so that it will also be quite long.");
                ClientSize = new System.Drawing.Size(292, 268);
                Controls.Add(label1);
                Opacity = 0.8;
            }
        }

    Read the article

  • Show/hide text based on optgroup selection using jQuery

    - by general exception
    I have the following HTML markup:

        <select name="Fault" class="textbox" id="fault">
            <option>Single Light Out</option>
            <option>Light Dim</option>
            <option>Light On In Daytime</option>
            <option>Erratic Operating Times</option>
            <option>Flashing/Flickering</option>
            <option>Causing Tv/Radio Interference</option>
            <option>Obscured By Hedge/Tree Branches</option>
            <option>Bracket Arm Needs Realigning</option>
            <option>Shade/Cover Missing</option>
            <option>Column In Poor Condition</option>
            <option>Several Lights Out (please state how many)</option>
            <option>Column Leaning</option>
            <option>Door Missing/Wires Exposed</option>
            <option>Column Knocked Down/Traffic Accident</option>
            <option>Lantern Or Bracket Broken Off/Hanging On Wires</option>
            <option>Shade/Cover Hanging Open</option>
        </select>
        <span id="faulttext" style="color:Red; display:none">Text in the span</span>

    This jQuery snippet wraps the last five options in an option group:

        $('#fault option:nth-child(n+12)').wrapAll('<optgroup label="Urgent Reasons">');

    What I want to do is remove the display:none if any of the items within the <optgroup> are selected, effectively displaying the span message (possibly with a fade-in transition), and also hide the message if any option outside the <optgroup> is selected.

    Read the article

  • How big can I make an Android application's canvas in terms of pixels?

    - by user279112
    I've estimated the size of my Android emulator's screen in pixels, although I think its resolution can be changed to other values. Frankly, that doesn't solve the general problem: I don't know how many pixels along each axis I have to work with in my Android applications. The main problem I'm trying to solve is this: how do I pick a safe resolution for an application if I want to keep things' sizes constant (so that if the application screen shrinks, objects still show up just as big; there just won't be as many of them being shown), using a single universal resolution for each program? Failing that, how do I make sure everything is alright if I do the same thing with a few pre-set resolutions? The question I seem to need answered first is: how big can I always make my application in pixels, NOT counting the case where a user resizes the application's screen to something smaller than the maximum size permitted by the phone and its operating system? I really want to keep this simple. If I were doing this for a modern desktop, for instance, I know that if I design the application with an 800x600 canvas, the user can still shrink the application to the point where they're not doing themselves any favors, but at least I can basically count on it working right and not being too big for the monitor. Is there such a magic resolution for Android, assuming that I'm designing for API level 3+ (Android 1.5+)? Thanks

    Read the article

  • SQL Server MDF file database attachment

    - by jnsohnumr
    Hello all. I'm having a bear of a time getting Visual Studio 2010 (Ultimate, I think) to properly attach to my database. It was moved from its original spot to #MYAPP#/#MYAPP#.Web/App_Data/#MDF_FILE#.mdf. I have three instances of SQL Server running on this machine. I have tried to replace the old MDF file with my new one and cannot get the connection string right for it. What I really want to do is just open some DB instance and run a DB create script, so that I have a DB that was generated via my edmx (generate database from model) in a Silverlight Business Application (C#). Right now, when I go to Server Explorer in VS, choose Add New Connection, choose Microsoft SQL Server Database File (SqlClient), choose my file location (the App_Data directory), use Windows authentication, and hit the Test Connection button, I get the following error:

        Unable to open the physical file "". Operating system error 5: "5(Access Denied.)".
        An attempt to attach an auto-named database for file "" failed. A database with the
        same name exists, or specified file cannot be opened, or it is located on UNC share.

    The MDF file was created on the same machine by connecting to (local) in SQL Server Management Studio, opening a new query, pasting in the SQL from the generated DDL file, adding

        CREATE DATABASE [NcrCarDatabase];
        GO

    before the pasted SQL, and executing the query. I then disconnected from the DB in Management Studio, closed Management Studio, navigated to the DATA directory for that instance, and copied the MDF and LDF files to my application's App_Data folder. I am then trying to connect to the same file inside Visual Studio. I hope that gives more clarity to my problems :). The connection string is:

        Data Source=.\SQLEXPRESS;AttachDbFilename=C:\SourceCode\NcrCarDatabase\NcrCarDatabase.Web\App_Data\NcrCarDatabase.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True

    Read the article

  • How to speed up a Python nested loop?

    - by erich
    I'm running the nested loop below in Python. It serves as a basic way of searching through existing financial time series and looking for periods that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function that could result in a speedup, or any other way to use numpy/scipy to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
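    Not part of the original question, but as a rough illustration of the "operate on whole arrays" idea mentioned above, here is one possible vectorization sketch (Python 3 / NumPy). The variable names and the prefix-sum trick are my own assumptions, and the results come out grouped by window length rather than by starting index:

        import numpy as np

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15

        # Toy data, shifted by one so close[j] / close[i] never divides by zero.
        close = np.arange(1, ARRAY_LENGTH + 1, dtype='float64')
        volume = np.arange(ARRAY_LENGTH, dtype='float64')

        # cumvol[k] == volume[:k].sum(), so sum(volume[i+1:j+1]) == cumvol[j+1] - cumvol[i+1]
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))

        n = len(close) - INTERVAL_LENGTH
        i = np.arange(n)
        results = []
        for offset in range(1, INTERVAL_LENGTH):   # one vectorized pass per window length
            j = i + offset
            ret = close[j] / close[i]
            vol = cumvol[j + 1] - cumvol[i + 1]
            hits = np.nonzero((ret > 1.0001) & (ret < 1.5) & (vol > 100))[0]
            for k in hits:
                results.append([int(i[k]), int(j[k]), float(ret[k]), float(vol[k])])

        print(len(results))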

    Read the article

  • On Disk Substring index

    - by emeryc
    I have a file (a FASTA file, to be specific) that I would like to index, so that I can quickly locate any substring within the file and then find the location within the original FASTA file. This would be easy to do in many cases using a trie or a substring array; unfortunately the strings I need to index are 800+ MB, which means doing it in memory is unacceptable, so I'm looking for a reasonable way to create this index on disk with minimal memory usage. (Edit for clarification.) I am only interested in the headers of the proteins, so for the largest database I care about this is roughly 800 MB of text. I would like to be able to find an exact substring in O(N) time based on the input string. This must be usable on 32-bit machines, as it will be shipped to random people who are not expected to have 64-bit machines. I want to be able to index against any word break within a line, up to the end of the line (though lines can be several MB long). Hopefully this clarifies what is needed and why the current suggestions are not illuminating. I should also add that this needs to be done from within Java, and must run on client computers on various operating systems, so I can't use any OS-specific solution; it must be a programmatic solution.
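    Not part of the original question, and the poster needs Java, but to make the "index on disk" idea concrete, here is a minimal sketch in Python 3 of one common approach: a sorted array of byte offsets (a suffix-array over the word breaks of the header lines), kept on disk and binary-searched with a handful of seeks per query. The word-break characters, the key length, and the ">" header test are assumptions; the offsets are sorted in memory and the whole file is mmapped purely for brevity, which a truly 32-bit-safe build would replace with an external sort and windowed reads:

        import mmap
        import struct

        WORD_BREAKS = b" \t|;"   # assumed separators; adjust to the real header syntax
        KEY_BYTES = 64           # how much of each suffix the sort compares

        def build_index(fasta_path, index_path):
            """Write a sorted array of byte offsets, one per word break per header line."""
            with open(fasta_path, "rb") as f, open(index_path, "wb") as out:
                data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                offsets = []
                pos = 0
                for line in iter(data.readline, b""):
                    if line.startswith(b">"):              # FASTA header lines only
                        offsets.append(pos)
                        for k, ch in enumerate(line):
                            if ch in WORD_BREAKS:
                                offsets.append(pos + k + 1)
                    pos += len(line)
                offsets.sort(key=lambda o: data[o:o + KEY_BYTES])  # external sort in real life
                out.write(b"".join(struct.pack("<Q", o) for o in offsets))
                data.close()

        def find(fasta_path, index_path, query):
            """Binary-search the index; return a byte offset where query starts, or None."""
            q = query.encode()                             # assumes len(q) <= KEY_BYTES
            with open(fasta_path, "rb") as f, open(index_path, "rb") as idx:
                data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                count = idx.seek(0, 2) // 8
                def key(k):
                    idx.seek(k * 8)
                    (o,) = struct.unpack("<Q", idx.read(8))
                    return o, data[o:o + len(q)]
                lo, hi = 0, count
                while lo < hi:
                    mid = (lo + hi) // 2
                    if key(mid)[1] < q:
                        lo = mid + 1
                    else:
                        hi = mid
                if lo < count:
                    off, text = key(lo)
                    if text == q:
                        return off
                return None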

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL under Python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket:

        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_privatekey_file("mykey.pem")
        ctx.use_certificate_file("mycert.pem")
        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        addr = ('', int(8081))
        sock.bind(addr)
        sock.listen(5)

    Here is how it accepts clients:

        sock.setblocking(0)
        while True:
            if len(select([sock], [], [], 0.25)[0]):
                client_sock, client_addr = sock.accept()
                client = ClientGen(client_sock)

    And here is how it sends/receives on the connected sockets:

        while True:
            (r, w, e) = select.select([sock], [sock], [], 0.25)
            if len(r):
                bytes = sock.recv(1024)
            if len(w):
                n_bytes = sock.send(self.message)

    It's compacted, but you get the general idea. The problem is that once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see, anyway):

        Traceback (most recent call last):
          File "ClientGen.py", line 50, in networkLoop
            n_bytes = sock.send(self.message)
        WantReadError

    The manual's description of the 'WantReadError' is very vague, saying it can come from just about anywhere. What am I doing wrong?
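    For context (an illustration, not something from the original post): on a non-blocking pyOpenSSL connection, WantReadError and WantWriteError are not fatal. They mean the TLS layer has to read (or write) more handshake or record data before the call can finish, so the usual pattern is to catch them, wait in select(), and retry the same call. A minimal sketch, assuming the same connection object and self.message payload as in the snippets above:

        import select
        from OpenSSL import SSL

        def ssl_send_all(conn, data, timeout=0.25):
            """Keep retrying SSL.Connection.send() until the whole payload is accepted."""
            sent = 0
            while sent < len(data):
                try:
                    sent += conn.send(data[sent:])
                except SSL.WantReadError:
                    # The TLS layer needs incoming data first (e.g. handshake/renegotiation).
                    select.select([conn], [], [], timeout)
                except SSL.WantWriteError:
                    # The underlying socket buffer is full; wait until it is writable.
                    select.select([], [conn], [], timeout)
            return sent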

    Read the article

  • Setting up SVN (Subversion) to manage our company's files; how to exclude large files from being versioned

    - by Roeland
    Two other guys and I recently started our own web development company. We each work from home, and we have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc.; anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 Server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this:

        Clients
            Company A
                Design (Photoshop files, wireframes, concepts)
                Documents (logins, quotes, proposals, etc.)
                Site Backups
            Company B
                Design
                Documents
                Site Backups
        Prospects
            Company C
            Company D
        Our Company
            Our Website
            Documents (contract, operating procedures)

    My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks!

    Edit: I am also curious whether this is the way to go at all. I like this system since it keeps versions of all my documents, and at the same time I will essentially have three backups in three different locations (three local copies), so there is no need for separate backups. I am just unsure how SVN would perform as purely a huge file repository.

    Read the article

  • How to collect and inject all beans of a given type in Spring XML configuration

    - by GrzegorzOledzki
    One of the strongest accents of the Spring framework is the Dependency Injection concept. I understand that part of the advice behind it is to separate the general, high-level mechanism from the low-level details (as announced by the Dependency Inversion Principle). Technically, that boils down to a bean implementation knowing as little as possible about a bean being injected as a dependency, e.g.

        public class PrintOutBean {
            private LogicBean logicBean;

            public void action() {
                System.out.println(logicBean.humanReadableDetails());
            }
            //...
        }

        <bean class="PrintOutBean">
            <property name="loginBean" ref="ShoppingCartBean"/>
        </bean>

    But what if I wanted to have a high-level mechanism operating on multiple dependent beans?

        public class MenuManagementBean {
            private Collection<Option> options;

            public void printOut() {
                for (Option option : options) {
                    // do something for option
                }
                //...
            }
        }

    I know one solution would be to use the @Autowired annotation in the singleton bean, that is...

        @Autowired
        private Collection<Option> options;

    But doesn't that violate the separation principle? Why do I have to specify which dependents to take in the very same place I use them (i.e. the MenuManagementBean class in my example)? Is there a way to inject collections of beans in the XML configuration like this (without any annotation in the MMB class)?

        <bean class="MenuManagementBean">
            <property name="options">
                <xxx:autowire by-type="MyOptionImpl"/>
            </property>
        </bean>

    Read the article

  • System calls in Windows & the Native API?

    - by claws
    Recently I've been using a lot of assembly language on *NIX operating systems, and I was wondering about the Windows side of things.

    Calling convention in Linux:

        mov $SYS_Call_NUM, %eax
        mov $param1, %ebx
        mov $param2, %ecx
        int $0x80

    That's it. That is how we make a system call in Linux. Reference for all Linux system calls (which $SYS_Call_NUM and which parameters to use): http://docs.cs.up.ac.za/programming/asm/derick_tut/syscalls.html ; official reference: http://kernel.org/doc/man-pages/online/dir_section_2.html

    Calling convention in Windows: ??? Reference for all Windows system calls: ??? Unofficial: http://www.metasploit.com/users/opcode/syscalls.html , but how do I use these in assembly unless I know the calling convention? Official: ??? If you say they didn't document it, then how is one going to write libc for Windows without knowing the system calls? How is one going to do Windows assembly programming? At least in driver programming one needs to know these, right?

    Now, what's up with the so-called Native API? Are "Native API" and "system calls" on Windows different terms referring to the same thing? To confirm, I compared the two from two unofficial sources. System calls: http://www.metasploit.com/users/opcode/syscalls.html ; Native API: http://undocumented.ntinternals.net/aindex.html

    My observations: all system calls begin with the letters Nt, whereas the Native API contains a lot of functions that do not begin with Nt; the system calls of Windows are a subset of the Native API, i.e. system calls are just part of the Native API. Can anyone confirm this and explain?

    Read the article

  • How long is the time frame between context switches on Windows?

    - by mattcodes
    Reading CLR via C# 2.0 (I don't have 3.0 with me at the moment), is this still the case:

        If there is only one CPU in a computer, only one thread can run at any one time. Windows has to
        keep track of the thread objects, and every so often, Windows has to decide which thread to
        schedule next to go to the CPU. This is additional code that has to execute once every 20
        milliseconds or so. When Windows makes a CPU stop executing one thread's code and start
        executing another thread's code, we call this a context switch. A context switch is fairly
        expensive because the operating system has to:

    So, circa CLR via C# 2.0, let's say we are on a Pentium 4 2.4 GHz, single core, non-HT, running XP. Every 20 milliseconds? Where a CLR thread or Java thread is mapped to an OS thread, does that mean only a maximum of 50 threads per second get a chance to run? I've read here on SO that a context switch itself is very fast (microseconds), but how often, roughly (order-of-magnitude guesses), will a modest five-year-old server, say a Windows 2003 single-core Pentium Xeon, give the OS the opportunity to context switch? Is 20 ms in the right area? I don't need exact figures, I just want to be sure that's in the right area; it seems rather long to me.

    Read the article

  • How to write a "thread safe" function in C?

    - by Andrei Ciobanu
    Hello. I am writing some data structures in C, and I've realized that their associated functions aren't thread safe. The code I am writing uses only standard C, and I want to achieve some sort of 'synchronization'. I was thinking of doing something like this:

        enum sync_e { TRUE, FALSE };
        typedef enum sync_e sync;

        struct list_s {
            /* Other stuff */
            struct list_node_s *head;
            struct list_node_s *tail;
            enum sync_e locked;
        };
        typedef struct list_s list;

    i.e. include a "boolean" field in the list structure that indicates the structure's state: locked or unlocked. For example, an insertion function would be rewritten this way:

        int list_insert_next(list *l, list_node *e, int x)
        {
            while (l->locked == TRUE) {
                /* Wait */
            }
            l->locked = TRUE;

            /* Insert element */
            /* -------------- */

            l->locked = FALSE;
            return (0);
        }

    While a thread is operating on the list, the 'locked' field is set to TRUE, not allowing any other alterations; after the operation completes, 'locked' is set back to FALSE. Is this approach good? Do you know of other approaches (using only standard C)?

    Read the article

  • C# type conversion between two similar DataTable objects

    - by Ali
    I have a .NET project with Sync Framework and two separate DataSets, one for MS SQL and one for Compact SQL. In my base class I have a generic DataTable object. In my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline, for example:

        if (online)
            _dataTable = new MSSQLDataSet.Customer();
        else
            _dataTable = new CompactSQLDataSet.Customer();

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

        public void changeCustomerID(int ID)
        {
            if (online)
                ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
            else
                ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
        }

    But I don't think this is very efficient, and I believe it could be done in a smarter way, using only one line of code by getting the type of _dataTable dynamically at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or operator convert _dataTable to its runtime type, but still be able to use its design-time properties, which are the same between the two types? Something like:

        ((aType)_dataTable)[i].CustomerID = value;
        // or
        GetRuntimeType(_dataTable)[i].CustomerID = value;

    Read the article

  • How to refactor this duplicated LINQ code?

    - by benrick
    I am trying to figure out how to refactor this LINQ code nicely. This code, and other code like it, repeats within the same file as well as in other files; sometimes the data being manipulated is identical, and sometimes the data changes while the logic remains the same. Here is an example of duplicated logic operating on different fields of different objects:

        public IEnumerable<FooDataItem> GetDataItemsByColor(IEnumerable<BarDto> dtos)
        {
            double totalNumber = dtos.Where(x => x.Color != null).Sum(p => p.Number);
            return from stat in dtos
                   where stat.Color != null
                   group stat by stat.Color into gr
                   orderby gr.Sum(p => p.Number) descending
                   select new FooDataItem
                   {
                       Color = gr.Key,
                       NumberTotal = gr.Sum(p => p.Number),
                       NumberPercentage = gr.Sum(p => p.Number) / totalNumber
                   };
        }

        public IEnumerable<FooDataItem> GetDataItemsByName(IEnumerable<BarDto> dtos)
        {
            double totalData = dtos.Where(x => x.Name != null).Sum(v => v.Data);
            return from stat in dtos
                   where stat.Name != null
                   group stat by stat.Name into gr
                   orderby gr.Sum(v => v.Data) descending
                   select new FooDataItem
                   {
                       Name = gr.Key,
                       DataTotal = gr.Sum(v => v.Data),
                       DataPercentage = gr.Sum(v => v.Data) / totalData
                   };
        }

    Does anyone have a good way of refactoring this?

    Read the article

  • Continuous build infrastructure recommendations for primarily C++; GreenHills Integrity

    - by andersoj
    I need your recommendations for continuous build products for a large (1-2 MLOC) software development project. Characteristics:

        - ClearCase revision control
        - Approx. 80% C++, 15% Java, 5% script or low-level
        - Compiles for the Green Hills Integrity OS, but also some Windows and JVM chunks
        - Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc.)
        - Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc. (~10 separate images; 5 distinct operating systems)
        - Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
        - Build cycle time is a major issue on the project; need support for whatever features help address this (mostly managing a large farm of build machines, I guess)
        - Operates in a secure environment (this is a gov't program). (Edited to add: this is a classified program; outsourcing the build infrastructure is a non-starter.)

    I'm interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but try to keep your answers focused on the build infrastructure piece and observations directly related to it. Cost is not an object; scalability and ease of retrofitting onto an existing infrastructure are key. JA

    Read the article

  • Why do strings behave like a ValueType?

    - by AJP
    I was perplexed after executing this piece of code, where strings seem to behave as if they were value types. I am wondering whether the assignment operator operates on values, like the equality operator does for strings. Here is the code I used to test this behavior:

        using System;

        namespace RefTypeDelimma
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string a1, a2;
                    a1 = "ABC";
                    a2 = a1;    // This should assign the a1 reference to a2
                    a2 = "XYZ"; // I expect this to change the a1 value to "XYZ"
                    Console.WriteLine("a1:" + a1 + ", a2:" + a2);
                    // Outputs  a1:ABC, a2:XYZ
                    // Expected a1:XYZ, a2:XYZ (as string being a ref type)

                    // Altering values of ref types inside a procedure
                    // should reflect in the variable that's being passed in
                    Proc(a2);
                    Console.WriteLine("a1: " + a1 + ", a2: " + a2);
                    // Outputs  a1:ABC, a2:XYZ
                    // Expected a1:NEW_VAL, a2:NEW_VAL (as string being a ref type)
                }

                static void Proc(string Val)
                {
                    Val = "NEW_VAL";
                }
            }
        }

    If I use custom classes instead of strings in the above code, I get the expected behavior. I suspect this has something to do with string immutability; welcoming expert views on this.

    Read the article

  • Cannot change font size/type in plots

    - by Sameet Nabar
    I recently had to re-install my operating system (Ubuntu). The only thing I did differently is that I installed Matlab on a separate partition, not the main Ubuntu partition. After re-installing, the fonts in my plots are no longer configurable: for example, if I ask for the title font to be bold, it doesn't happen. I ran the sample code below on my computer and then on my colleague's computer, and the two results are linked below. This cannot be a problem with the code; it must be something in Matlab's settings. Could somebody please tell me what settings I need to change? Thanks in advance for your help. Regards, Sameet.

        x1 = -pi:.1:pi;
        x2 = -pi:pi/10:pi;
        y1 = sin(x1);
        y2 = tan(sin(x2)) - sin(tan(x2));
        [AX, H1, H2] = plotyy(x1, y1, x2, y2);
        xlabel('Time (hh:mm)');
        ylabel(AX(1), 'Plot1');
        ylabel(AX(2), 'Plot2');
        axes(AX(2))
        set(H1, 'linestyle', 'none', 'marker', '.');
        set(H2, 'linestyle', 'none', 'marker', '.');
        title('Plot Title', 'FontWeight', 'bold');
        set(gcf, 'Visible', 'off');
        [legh, objh] = legend([H1 H2], 'Plot1', 'Plot2', 'location', 'Best');
        set(legend, 'FontSize', 8);
        print -dpng Trial.png;

    Bad image: http://imageshack.us/photo/my-images/708/trial1u.png/
    Good image: http://imageshack.us/photo/my-images/87/trial2.png/

    Read the article

  • The most expressive web app programming language/framework combination?

    - by Thor
    When thinking about creating web applications, I often ask myself how I can make the code easy to read and, above all, easy to maintain. There have been a lot of inventions in the last couple of years, with probably millions of programmers sharing these thoughts. So, let's test whether we can squeeze the distilled knowledge of millions of StackOverflow users into this ultimate answer: which language/framework combination in the world right now is the most expressive for common tasks? Please provide a simple example of that simplicity, add a link to more information about the language, and give no more than one entry per language/framework combination. Specifications:

        - "Web application" in this context refers to applications that run on a server and output HTML/JavaScript/CSS for rendering in a client browser. Any server operating system is OK.
        - "Language/framework combination" can, for example, be Java+Struts, Java+SpringWeb, Perl+CGI, or Java+ZK.
        - "Most expressive" in this context means minimal code to do common tasks.
        - "Common tasks" include simple output/input, i.e. specifying, displaying, and processing forms, as well as simple styling of output. I am more concerned about minimality than complete functionality.

    A decent language design can have great potential even if it is not complete.

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted from server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
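    As an aside (an illustration, not part of the original question): the writer's "seek the head, write, flush" step can be made as explicit as possible with an fsync, which asks the NFS client on A to commit the write to the server before the next file is touched. When B actually sees the change still depends on B's own attribute and data caching (for example the actimeo/noac mount options), so this only constrains ordering on the writing side. A minimal Python sketch, with made-up paths and payloads:

        import os

        def write_update(path, payload):
            """Writer on host A: rewrite the head of the file and push it to the NFS server."""
            with open(path, "r+b") as f:
                f.seek(0)
                f.write(payload)
                f.flush()              # drain the userspace buffer
                os.fsync(f.fileno())   # ask the kernel/NFS client to commit to the server now

        # F1 is committed to the server strictly before F2 is touched.
        write_update("/mnt/shared/F1", b"state of F1")
        write_update("/mnt/shared/F2", b"state of F2")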

    Read the article

  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    It's a really strange issue; the machine information below is from DxDiag. No error is reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine, while creating CLSID_FilterGraph works fine. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. Which DLL is this interface related to, and does any function need to be called beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4 GB RAM but with similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load among a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting up config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup. Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when create new tables when
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actual executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements, if your
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to disk
        # based table This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE.
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition of the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

    Read the article

  • How to debug a JBoss out-of-memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it seems to use memory as intended by the startup configuration. However, it seems that when some unknown user action is taken in the sole web application JBoss is serving (or when the log file grows to a certain size), memory increases dramatically and JBoss freezes. When JBoss freezes, it is difficult to kill the process or do anything because of the low memory. When the process is finally killed via -9 and the server is restarted, the log file is very small and only contains output from the startup of the newly started process, not any information on why the memory increased so much. This is why it is so hard to debug: server.log does not have information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 KB, though it grows properly during normal memory circumstances.

    This is the JBoss configuration:

        JBoss (MX MicroKernel) 4.0.3
        JDK 1.6.0 update 22
        PermSize=512m
        MaxPermSize=512m
        Xms=1024m
        Xmx=6144m

    This is basic info on the system:

        Operating system: CentOS Linux 5.5
        Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
        Processor information: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    This is a good example of the system during normal pre-freeze conditions, a few minutes after the JBoss service starts:

        Running processes: 183
        CPU load averages: 0.16 (1 min) 0.06 (5 mins) 0.09 (15 mins)
        CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
        Real memory: 17.38 GB total, 2.46 GB used
        Virtual memory: 19.59 GB total, 0 bytes used
        Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:

        Running processes: 225
        CPU load averages: 4.66 (1 min) 1.84 (5 mins) 0.93 (15 mins)
        CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
        Real memory: 17.38 GB total, 17.18 GB used
        Virtual memory: 19.59 GB total, 706.29 MB used
        Local disk space: 113.37 GB total, 11.89 GB used

    Read the article

  • Listening on UDP or switching to TCP in an MFC application

    - by Alexander.S
    I'm editing a legacy MFC application, and I have to add some basic network functionality. The operating side has to receive a simple instruction (numbers 1, 2, 3, 4, ...) and do something based on it. The client wants the latency to be as low as possible, so naturally I decided to use datagrams (UDP). But reading all sorts of resources left me bugged: I cannot listen on UDP sockets (CAsyncSocket) in MFC; it's only possible to call Receive, which blocks and waits. Blocking the UI isn't really smart, so I guess I could use some threading technique, but since I'm not all that experienced with MFC, how should that be implemented? The other part of the question is whether I should do this at all, or revert to TCP, considering reliability and implementation issues. I know that UDP is unreliable, but just how unreliable is it really? I read that it can be up to 50% faster, which is a lot for me.

    Read the article

  • Parsing localized date strings in PHP

    - by Mikeage
    Hi, I have some code (it's part of a WordPress plugin) that takes a text string and the format specifier given to date(), and attempts to parse it into an array containing hour, minute, second, day, month, and year. Currently I use the following code (note that strtotime is horribly unreliable with things like 01/02/03):

        // $format contains the string originally given to date(),
        // and $content is the rendered string
        if (function_exists('date_parse_from_format')) {
            $content_parsed = date_parse_from_format($format, $content);
        } else {
            $content = preg_replace("([0-9]st|nd|rd|th)", "\\1", $content);
            $content_parsed = strptime($content, dateFormatToStrftime($format));
            $content_parsed['hour']   = $content_parsed['tm_hour'];
            $content_parsed['minute'] = $content_parsed['tm_min'];
            $content_parsed['day']    = $content_parsed['tm_mday'];
            $content_parsed['month']  = $content_parsed['tm_mon'] + 1;
            $content_parsed['year']   = $content_parsed['tm_year'] + 1900;
        }

    This actually works fairly well, and seems to handle every combination I've thrown at it. However, recently someone gave me "24 ноября, 2010". This is Russian for November 24, 2010 (the date format was j F, Y), and it is parsed as year = 2010, month = null, day = 24. Are there any functions I can use that know how to translate both "November" and "ноября" into 11?

    EDIT: Running print_r(setlocale(LC_ALL, 0)); returns C. Switching back to strptime() seems to fix the problem, but the docs warn:

        Internally, this function calls the strptime() function provided by the system's C library.
        This function can exhibit noticeably different behaviour across different operating systems.
        The use of date_parse_from_format(), which does not suffer from these issues, is recommended
        on PHP 5.3.0 and later.

    Is date_parse_from_format() the correct API, and if so, how do I get it to recognize the language?

    Read the article
