Search Results

Search found 14861 results on 595 pages for 'high speed computing'.


  • How do I detect whether a similar document is already stored in a Lucene index?

    - by Jenea
    Hi. I need to exclude duplicates in my database. The problem is that the duplicates are not exact matches but rather similar documents. For this purpose I decided to use FuzzyQuery, like so:

        var fuzzyQuery = new global::Lucene.Net.Search.FuzzyQuery(
            new Term("text", queryText), 0.8f, 0);
        hits = _searcher.Search(fuzzyQuery);

    The idea was to set the minimum similarity to 0.8 (which I think is high enough) so that only similar documents are found, excluding those that are not sufficiently similar. To test this code I decided to check whether it finds an already existing document: the variable queryText was assigned a value that is stored in the index. The code above found nothing; in other words, it does not detect even an exact match. The index was built by this code:

        doc.Add(new global::Lucene.Net.Documents.Field(
            "text", text,
            global::Lucene.Net.Documents.Field.Store.YES,
            global::Lucene.Net.Documents.Field.Index.TOKENIZED,
            global::Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS));

    I followed the recommendations from the answers below, and the results are: a TermQuery doesn't return any results, while a query constructed with

        var _analyzer = new RussianAnalyzer();
        var parser = new global::Lucene.Net.QueryParsers.QueryParser("text", _analyzer);
        var query = parser.Parse(queryText);
        var _searcher = new IndexSearcher(Settings.General.Default.LuceneIndexDirectoryPath);
        var hits = _searcher.Search(query);

    returns several results: the document with the exact match gets the maximum score, and several other documents have similar content.
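
    A likely explanation: FuzzyQuery matches a single term, so handing it a whole sentence as one Term against a TOKENIZED field can never match. Running the text through the same analyzer and OR-ing one FuzzyQuery per term is the usual workaround. A minimal sketch (old Lucene.Net 2.x-style API; exact names vary by version, and the analyzer is assumed to be the same RussianAnalyzer used at index time):

        // Sketch: one FuzzyQuery per analyzed term, OR-ed together.
        // Assumes the old 2.x TokenStream API (stream.Next() / TermText()).
        using System.IO;
        using Lucene.Net.Analysis;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        static Query BuildFuzzyDuplicateQuery(Analyzer analyzer, string queryText)
        {
            var query = new BooleanQuery();
            TokenStream stream = analyzer.TokenStream("text", new StringReader(queryText));
            Token token;
            while ((token = stream.Next()) != null)
            {
                var term = new Term("text", token.TermText());
                query.Add(new FuzzyQuery(term, 0.8f, 0), BooleanClause.Occur.SHOULD);
            }
            return query;
        }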

    Read the article

  • External API function calls to control an AS3 timeline

    - by giles
    I have a problem using this code (below): the embedded Flash movieclip disappears or completely prevents the scrollto.js query from functioning in DW CS3. Communication between Flash and JavaScript works fine; it is the callback I can't get to work, and, more frustratingly, it should be simple, as no values are required. So far this has been hours of scouring the net without a workable end in sight... ahrr. What function do I need for this to work?

    JavaScript – to call the Flash event from an HTML button link, placed between the head tags:

        function callExternalInterface() {
            var flashMovie = window.document.menu;
            flashMovie.menu_up();
        }

    menu_up is the string. Does anyone know of a workable function for the callback?

    HTML:

        <div id="btn_up"><a href="#top" name="charDev" id="charDev" onclick="">top</a></div>

    This is the pane-navigation div that uses the scrollto.js query, and it's this link that needs to call back to the embedded "menubtns.swf" (nested in "AS3Menu_javascript.swf") to play 5 frames of that movieclip, via a JS function.

    Embedded .swf code, using swfobject.js with allowScriptAccess=always:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="menu"
                width="251" height="251" id="menu">
          <param name="movie" value="../~Assets/Flash/AS3Menu_javascript.swf" />
          <param name="allowScriptAccess" value="always" />
          <param name="movie" value="ExternalInterfaceScript.swf" />
          <param name="quality" value="high" />
          <object type="application/x-shockwave-flash" data="../~Assets/Flash/AS3Menu_javascript.swf" width="250" height="250">
            <p>Alternative content</p>
          </object>
        </object>

    AS3 / Flash (note that the AS3 API is addCallback with two arguments, not the AS2-style three-argument addCallBack):

        import flash.external.ExternalInterface;
        flash.system.Security.allowDomain("sourceDomain"); // placeholder domain
        ExternalInterface.addCallback("menu_up", resetmenu);
        function resetmenu() {
            gotoAndPlay("frame_label"); // or a frame number
        }

    Read the article

  • What are the alternatives to the Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries the way (named) events can. I have a scenario that is cross-process and cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to the signaled state) by the producer via this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumer to ever miss an event. In other words, the system needs to be perfectly reliable – but the producer does not need to know whether that is the case. The scenario can be thought of as a “clock ticker” – i.e., the producer provides a semi-regular signal for zero or more consumers to count, and all consumers must have the correct count over any given period of time. No polling by consumers is allowed (for performance reasons). The tick interval is just a few milliseconds (20 or so, though not perfectly regular). Raymond Chen (The Old New Thing) has a blog post pointing out the “fundamentally flawed” nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one? Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance, in that consumers must be able to act within 10ms of each event.
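
    One commonly suggested replacement is to give each consumer its own named auto-reset event: unlike a pulse, a Set on an auto-reset event latches until the waiter wakes, so a momentarily busy consumer doesn't miss the tick. A C# sketch (the event name and the registry of consumer names are hypothetical):

        using System;
        using System.Threading;

        class TickConsumer
        {
            // "Global\Tick_Consumer42" is a hypothetical name; real code would
            // register this name somewhere the producer can enumerate it.
            public static void Run()
            {
                using (var evt = new EventWaitHandle(false, EventResetMode.AutoReset,
                                                     @"Global\Tick_Consumer42"))
                {
                    while (true)
                    {
                        evt.WaitOne();               // latches a signal sent while busy
                        Console.WriteLine("tick");   // stand-in for the real handler
                    }
                }
            }
        }

        class TickProducer
        {
            public static void Tick(string[] consumerEventNames)
            {
                foreach (var name in consumerEventNames)    // hypothetical registry
                    using (var evt = EventWaitHandle.OpenExisting(name))
                        evt.Set();                          // wake exactly that consumer
            }
        }

    If a consumer must count every tick even when it falls behind, a named Semaphore per consumer (released once per tick) counts pending signals instead of latching just one.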

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap onto an HDC in GDI, or a RenderTarget in Direct2D, using the alpha channels respectively. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242). I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster, since they are in kernel space). The first idea that comes to mind is to split the source bitmap into three red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the three output bitmaps into the HDC. But I can't find a way to do the splitting and composition natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing may require a lot of additional copying; the performance overhead would be too high. Or maybe do the alpha blending in three passes, where each pass applies the blend for one channel while leaving the other two unchanged. Thanks for any comment. EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why doesn't Microsoft improve this API?
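
    For reference, the per-component math behind the example numbers, as a CPU-side sketch; any GDI/Direct2D route has to reproduce exactly this:

        using System;

        class BlendCheck
        {
            // out = (src * a + dst * (255 - a)) / 255, applied independently per channel
            static byte Blend(byte src, byte dst, byte alpha)
            {
                return (byte)((src * alpha + dst * (255 - alpha)) / 255);
            }

            static void Main()
            {
                byte[] src   = { 192, 192, 192 };   // bitmap color
                byte[] dst   = {   0, 255, 255 };   // HDC color
                byte[] alpha = {  30,  40,  50 };   // per-channel alpha
                for (int i = 0; i < 3; i++)
                    Console.Write(Blend(src[i], dst[i], alpha[i]) + " ");  // 22 245 242
            }
        }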

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not. Each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!). So what are the primitive operators? Which ones must be implemented directly in the runtime environment, from which all others can be built? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I could start with a Turing machine and go from there. That's a bit extreme.) Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
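
    For illustration only (one commonly cited minimal set, not the canonical answer): with @, !, stack-pointer access, +, NAND and a conditional branch, DUP, SWAP and friends become definable words. A tiny C# interpreter sketch:

        // Illustrative minimal primitive set wrapped in a toy inner interpreter.
        // SP@/SP! expose the stack pointer, which is what lets DUP etc. be defined.
        using System;

        enum Op { Lit, Fetch, Store, SpFetch, SpStore, Add, Nand, ZBranch, Halt }

        class MiniForth
        {
            static void Main()
            {
                int[] mem = new int[256];    // unified memory for @ and !
                int[] stack = new int[64];
                int sp = 0, ip = 0;
                object[] prog = { Op.Lit, 2, Op.Lit, 3, Op.Add, Op.Halt };
                while (true)
                {
                    switch ((Op)prog[ip++])
                    {
                        case Op.Lit:     stack[sp++] = (int)prog[ip++]; break;
                        case Op.Fetch:   stack[sp - 1] = mem[stack[sp - 1]]; break;
                        // ( x addr -- ): C# evaluates the index before the RHS,
                        // so the first --sp pops the address, the second the value.
                        case Op.Store:   mem[stack[--sp]] = stack[--sp]; break;
                        case Op.SpFetch: stack[sp] = sp; sp++; break;
                        case Op.SpStore: sp = stack[sp - 1]; break;
                        case Op.Add:     stack[sp - 2] += stack[sp - 1]; sp--; break;
                        case Op.Nand:    stack[sp - 2] = ~(stack[sp - 2] & stack[sp - 1]); sp--; break;
                        case Op.ZBranch: ip = (stack[--sp] == 0) ? (int)prog[ip] : ip + 1; break;
                        case Op.Halt:    Console.WriteLine(stack[sp - 1]); return; // prints 5
                    }
                }
            }
        }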

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer not depend on System.Web (or any other implementation/environment dependency), as it will be consumed in different environments – nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
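
    A shape sketch of the approach described (all names illustrative, not a definitive design): the domain assembly exposes a repository interface and service with no System.Web dependency, and the web project would add a thin MembershipProvider subclass whose members delegate to the service:

        // Domain layer: no System.Web reference anywhere in this assembly.
        public class User                       // domain entity
        {
            public string UserName { get; set; }
            public string Email { get; set; }
        }

        public interface IMembershipRepository  // implemented over the chosen ORM
        {
            User FindByUserName(string userName);
            bool ValidateCredentials(string userName, string password);
            void Save(User user);
        }

        public class MembershipService          // business rules live here, not in the provider
        {
            private readonly IMembershipRepository repository;
            public MembershipService(IMembershipRepository repository)
            {
                this.repository = repository;
            }
            public bool ValidateUser(string userName, string password)
            {
                // validation/business logic would go here
                return repository.ValidateCredentials(userName, password);
            }
        }

    The "DomainMembershipProvider" then lives in the web project only, overriding each MembershipProvider member with a one-line call into MembershipService.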

    Read the article

  • Performance problems when writing to a log4net FileAppender from multiple threads

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a FileAppender from separate O/S threads. The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock gets disabled, log4net reports errors about the file already being locked by another process (thread). A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, with any other threads simply adding their messages to a queue. That way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so it would also improve performance to use a separate thread for writing to the file so the CPU never waits on the I/O to complete. So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, it would be possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
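
    As far as I know, log4net (1.2.x) does not ship an asynchronous appender, so one hedged way to get the single-writer-plus-queue behaviour described is a ForwardingAppender subclass that drains a queue on one background thread. An outline only; flushing and shutdown handling are omitted:

        using System.Collections.Generic;
        using System.Threading;
        using log4net.Appender;
        using log4net.Core;

        public class AsyncForwardingAppender : ForwardingAppender
        {
            private readonly Queue<LoggingEvent> queue = new Queue<LoggingEvent>();
            private Thread worker;

            public override void ActivateOptions()
            {
                base.ActivateOptions();
                worker = new Thread(Drain) { IsBackground = true };
                worker.Start();
            }

            protected override void Append(LoggingEvent loggingEvent)
            {
                loggingEvent.Fix = FixFlags.All;   // capture thread-bound data now
                lock (queue)
                {
                    queue.Enqueue(loggingEvent);
                    Monitor.Pulse(queue);
                }
            }

            private void Drain()
            {
                while (true)
                {
                    LoggingEvent ev;
                    lock (queue)
                    {
                        while (queue.Count == 0) Monitor.Wait(queue);
                        ev = queue.Dequeue();
                    }
                    AppendLoopOnAppenders(ev);     // one thread does all file I/O
                }
            }
        }

    With writes serialized onto one thread, the wrapped FileAppender can keep its default exclusive lock instead of MinimalLock.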

    Read the article

  • AI: Determining which tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs and whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby creating a new user ID)

    I need an algorithm to determine which features (sites) are most likely to have a high ROI to query, i.e. which best discriminate between the plausible-so-far categories [users] and increase certainty that it's any given one. This needs to balance exploitation (testing based on prior test data) and exploration (testing things that haven't been tested enough to find out how they perform). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine this is a fairly standard problem in AI (having a cheap heuristic for which expensive queries to make), but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
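
    The standard cheap heuristic here is expected information gain: for each candidate site, compute how much the entropy of the posterior over users would shrink, on average, if that site were queried. A sketch, assuming the per-user priors and per-site hit probabilities are precomputed (e.g., by the cheap SQL queries mentioned):

        using System;

        static class InfoGain
        {
            // H(p) in bits over a discrete distribution.
            static double Entropy(double[] p)
            {
                double h = 0;
                foreach (double x in p)
                    if (x > 0) h -= x * Math.Log(x, 2);
                return h;
            }

            // prior[u] = current P(user u); pHit[u] = P(this site hits | user u).
            public static double ExpectedGain(double[] prior, double[] pHit)
            {
                int n = prior.Length;
                double ph = 0;
                for (int u = 0; u < n; u++) ph += prior[u] * pHit[u];
                if (ph <= 0 || ph >= 1) return 0;   // site tells us nothing

                var postHit = new double[n];
                var postMiss = new double[n];
                for (int u = 0; u < n; u++)
                {
                    postHit[u] = prior[u] * pHit[u] / ph;               // Bayes update on "hit"
                    postMiss[u] = prior[u] * (1 - pHit[u]) / (1 - ph);  // ... and on "miss"
                }
                return Entropy(prior)
                     - (ph * Entropy(postHit) + (1 - ph) * Entropy(postMiss));
            }
        }

    Ranking candidate sites by ExpectedGain and querying the top 0.2% is a greedy version of the exploit side; an exploration bonus for rarely tested sites can be added to the score.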

    Read the article

  • What programming languages do the top-tier universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past: "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link) If the languages taught by the schools are considered such a contributing factor to the quality of the school's program, then I'm curious what languages the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.). If the average school is performing so poorly due in large part to the languages (or lack thereof) that they teach, then what languages do the supposedly "good" CS programs teach that differentiate them? If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework. Edit: Shog-9 asks why I don't get this information directly from the schools' websites. I would, but many schools' websites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.

    Read the article

  • Sending Email through Gmail

    - by persistence
    I am writing a program that sends an email through Gmail, but I keep getting a serious operation-timeout error. What is the likely cause?

        class Mailer
        {
            MailMessage ms;
            SmtpClient Sc;

            public Mailer()
            {
                Sc = new SmtpClient("smtp.gmail.com");
                //Sc.Credentials = CredentialCache.DefaultNetworkCredentials;
                Sc.EnableSsl = true;
                Sc.Port = 465;
                Sc.Timeout = 900000000;
                Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
                Sc.UseDefaultCredentials = false;
                Sc.Credentials = new NetworkCredential("uid", "mypss");
            }

            public void MailTodaysBirthdays(List<Celebrant> TodaysCelebrant)
            {
                int i = TodaysCelebrant.Count();
                foreach (Celebrant cs in TodaysCelebrant)
                {
                    //if (IsEmail(cs.EmailAddress.ToString().Trim()))
                    //{
                    ms = new MailMessage();
                    ms.To.Add(cs.EmailAddress);
                    ms.From = new MailAddress("uid", "Developers", System.Text.Encoding.UTF8);
                    ms.Subject = "Happy Birthday ";
                    String EmailBody = "Happy Birthday " + cs.FirstName;
                    ms.Body = EmailBody;
                    ms.Priority = MailPriority.High;
                    try
                    {
                        Sc.Send(ms);
                    }
                    catch (Exception ex)
                    {
                        Sc.Send(ms);
                        BirthdayServices.LogEvent(ex.Message.ToString(), EventLogEntryType.Error);
                    }
                    //}
                }
            }
        }
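
    The likely cause is the port: System.Net.Mail.SmtpClient speaks only STARTTLS (explicit SSL), while Gmail's port 465 is implicit SSL, so the handshake never completes and the call times out. Gmail's STARTTLS port is 587, so the constructor's configuration would become something like:

        // Sketch of the usual fix: port 587 (STARTTLS), since SmtpClient does
        // not support the implicit SSL that Gmail runs on port 465.
        var sc = new SmtpClient("smtp.gmail.com")
        {
            Port = 587,
            EnableSsl = true,
            DeliveryMethod = SmtpDeliveryMethod.Network,
            UseDefaultCredentials = false,            // set before Credentials
            Credentials = new NetworkCredential("uid", "mypss")
        };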

    Read the article

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, read me out. The other question has a (most likely) incorrect accepted answer. I do not know how .NET generates its GUIDs; probably only Microsoft does, but there's a high chance it simply calls CoCreateGuid(). That function, however, is documented to call UuidCreate(), and the algorithms for creating a UUID are pretty well documented. Long story short, be that as it may, it seems that System.Guid.NewGuid() indeed uses the version 4 UUID generation algorithm, because all the GUIDs it generates match the criteria (see for yourself; I tried a couple million GUIDs, and they all matched). In other words, these GUIDs are almost random, except for a few known bits. This again raises the question - how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How often is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?
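
    For what it's worth, the version-4 check described above can be done like this; in the byte order returned by Guid.ToByteArray(), Data3 is stored little-endian, so the version nibble lands in the high nibble of byte 7 and the variant bits in the top bits of byte 8:

        // Checks the version-4 and RFC-variant bits of a freshly generated GUID.
        byte[] b = Guid.NewGuid().ToByteArray();
        bool isVersion4   = (b[7] >> 4) == 4;
        bool isRfcVariant = (b[8] & 0xC0) == 0x80;
        Console.WriteLine(isVersion4 && isRfcVariant);   // expected: True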

    Read the article

  • Source Control Check-in Comments at the Top of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation behind providing this functionality in the first place? Does anyone actually find these comments useful? Also, I would be curious to know whether this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion keyword substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

    Read the article

  • Need help supporting multiple screen resolutions on Android

    - by michael
    Hi, in my Android application I would like to support multiple screens. My layout XML files are in res/layout (the layouts are the same across different screen resolutions), and I place my high-resolution assets in res/drawable-hdpi. In my layout XML I have:

        <?xml version="1.0" encoding="utf-8"?>
        <TableLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/table"
            android:background="@drawable/bkg">

    I have put bkg.png in res/drawable-hdpi, and I have started my emulator with WVGA-800 as the AVD. But my application crashes:

        E/AndroidRuntime( 347): Caused by: android.content.res.Resources$NotFoundException: Resource is not a Drawable (color or path): TypedValue{t=0x1/d=0x7f020023 a=-1 r=0x7f020023}
        E/AndroidRuntime( 347): at android.content.res.Resources.loadDrawable(Resources.java:1677)
        E/AndroidRuntime( 347): at android.content.res.TypedArray.getDrawable(TypedArray.java:548)
        E/AndroidRuntime( 347): at android.view.View.<init>(View.java:1850)
        E/AndroidRuntime( 347): at android.view.View.<init>(View.java:1799)
        E/AndroidRuntime( 347): at android.view.ViewGroup.<init>(ViewGroup.java:284)
        E/AndroidRuntime( 347): at android.widget.LinearLayout.<init>(LinearLayout.java:92)
        E/AndroidRuntime( 347): ... 42 more

    Does anyone know how to fix my problem? Thank you.

    Read the article

  • How can I coordinate a code review tool and an RCS (specifically git)?

    - by Chris Nelson
    We're committed to git for code management. We're trying to find a tool that will help us systematize code reviews. We're considering Gerrit and Code Collaborator but would welcome other suggestions. We're having a problem answering the question, "How do we know every commit was reviewed?" (or "What commits have yet to be reviewed?"). One answer would be to submit every commit or every push for review and track incomplete reviews in the review tool. I'm not entirely happy with relying on another tool -- especially if it's not open source -- to tell us this. What seems to be a better answer is to rely on sign-offs in git (e.g., "Signed-off-by: Chris Nelson") and use a hook in the review tool to sign off commits on behalf of the reviewer. An advantage of this is that if we use some other review mechanism for some commits, we still have just one place to look for results. One problem with this is that we can't require review before push, because the review tool is unlikely to have access to the developer's private repository clone to add the sign-off. Any ideas on integrating code review with code management to achieve ease of use and high visibility of unreviewed changes?

    Read the article

  • Mac OS X: Getting detailed process information (specifically launch arguments) for arbitrary running processes

    - by Jasarien
    I am trying to detect when particular applications are launched. Currently I am using NSWorkspace, registering for the "did launch application" notification. I also use the runningApplications method to get apps that are currently running when my app starts. For most apps, the name of the app bundle is enough. I have a plist of "known apps" that I cross-check with the name passed in the notification. This works fine until you come across an app that acts as a proxy for launching another application using command-line arguments. Example: the newly released Portal on the Mac doesn't have a dedicated app bundle. Steam can create a shortcut, which serves only to launch the hl2_osx app with the -game argument and portal as its parameter. Since more Source-based games are heading to the Mac, I imagine they'll use the same method to launch, effectively running the hl2_osx app with the -game argument. Is there a nice way to get a list of the arguments (and their parameters) using a Cocoa API? NSProcessInfo comes close, offering an -arguments method, but it only provides information for its own process... NSRunningApplication offers the ability to get information about arbitrary apps using a PID, but no command-line args... Is there anything that fills the gap between the two? I'm trying not to go down the route of spawning an NSTask to run ps -p [pid] and parsing the output... I'd prefer something more high-level.

    Read the article

  • How to organize the work when a project needs to be re-implemented due to poor code quality?

    - by Dmitriy Nagirnyak
    Hi, I have joined a very small team where one main developer has been building the web app (.NET 4.0) for about 6 months. The project should be delivered within the next 2 months. After a first look at the code I can say that I would never allow it to go to production (things like catch { }, no tests at all, WebForms, etc.), so the code quality is incredibly low. My task is to improve that and still deliver the solution. So I plan to start with unit testing and re-implementing most of the functionality in MVC2 (though reusing some of the existing code). I estimate that I will need about 6 weeks to catch up with the current progress and be at the same functionality level the application would otherwise reach in that time. The problem is that the main developer who has been working on the project does not seem to be very 'professional' or skillful (he seems to be just starting out in IT, and many basic things are unknown to him). It will take a significant amount of time and effort to teach him how to do proper testing and development and how to apply some patterns. I am ready to take responsibility for re-implementing the application, but at the same time I don't want the main developer to sit idle; yet since he won't be able to significantly contribute to the better-world project at this stage, I am not sure what the best way would be to keep productivity high for both of us. Currently I think the following solution is good enough: he proceeds doing what he does until I catch up with him, and then we start working on a new project together. The problem is that this approach is of course not very productive, as one developer will work on the better-world project while the other proceeds with what he did, effectively doing similar tasks. Can you suggest how we could better organise the work together in order to be most efficient for the overall project? Thanks, Dmitriy.

    Read the article

  • Should I store generated code in source control?

    - by Ron Harlev
    This is a debate I'm taking part in, and I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important to the question). The generation is set up as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now, some people claim that having these classes saved in source control could cause confusion in case the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments? UPDATE: I asked this question because I really believed there was one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below should provide a very good guideline to the types of questions you should be asking yourself when deciding on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • Seeking a reporting or templating tool to generate large formatted PDF reports from a dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. A more specific example: I have a CSV file with each StackOverflow user in a row, where each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table to appear there with the average data, which is the same in every run of the PDF, in one column; in the second column I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open-source template language? I'm not even really sure whether what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and sub-bands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.

    Read the article

  • Should programmers do pro bono work? Where are the code public defenders?

    - by Tj Kellie
    How many projects are people doing based on the pro bono publico ideal, versus working for the highest wage or the potential for a cash-in-buy-out payday? For years lawyers have been called out for excessive gathering of wealth from high bill rates and huge settlement deals, hiring out their knowledge and skills to the highest bidders. People call for them to do more for free: to use the law and their time to defend or further some cause that's in the public's best interest. Is professional software development that different? There are so many bright people and so much knowledge of complex systems. Do you think there is enough of a "pro bono" movement to solve the social and public problems in the industry right now? If so, what are the examples to point to? OLPC? NOTE: Saying that open source software is the same as pro bono misses the point completely. I was looking for specific projects with a social context, not just group-sourcing for free software. Just because you're not making anyone pay for your software does not mean it's doing anyone any good. I'm not calling for mandatory pro bono work for programmers; I really just want some objective opinions and concrete examples of socially minded software/tech development projects like the One Laptop Per Child project. I'm sure open source would be a natural tie-in for some.

    Read the article

  • Zend PHP memory_limit

    - by RepDetec
    All, I am working on a Zend Framework based web application. We keep encountering out-of-memory errors on our dev server: Allowed memory size of XXXX bytes exhausted (tried to allocate YYYY...). We keep increasing memory_limit in php.ini, but it is now over 1000 MB. What is a normal memory_limit value? What are the usual suspects in PHP/Zend for running out of memory? We are using the Propel ORM. Thanks for all of the help! Update: I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable, for example: (tried to allocate 13344 bytes). If I set the memory limit very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out-of-memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that sheds any more light on what is going on. We are using Propel 1.5. It sounds like the actual release is going to come out later this month, but it doesn't look like anyone else is having this problem with it anyway, so I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally. Any more ideas? I have a ticket out to get Xdebug installed on the Linux box. Thanks, -rep

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operational transformation?

    - by Saikat Chakrabarti
    I just finished reading High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is forwarded to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, a server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?
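
    One commonly cited reading of that section (an assumption here, not a quote from the paper): the server keeps one independent two-way Jupiter sync per client, each with its own state vector and unacknowledged-ops queue, plus a single authoritative document copy. A C# skeleton, with the actual transform left hypothetical:

        using System.Collections.Generic;

        class Op { public int MyMsgs; public int YourMsgs; /* plus the edit itself */ }

        class PerClientSync
        {
            public List<Op> Unacknowledged = new List<Op>();

            // Hypothetical: the standard Jupiter transform against this
            // client's unacknowledged ops would go here.
            public Op TransformIncoming(Op op) { return op; }
            public Op PrepareOutgoing(Op op) { Unacknowledged.Add(op); return op; }
        }

        class JupiterServer
        {
            Dictionary<string, PerClientSync> syncs = new Dictionary<string, PerClientSync>();
            string document = "ABCD";   // single authoritative copy

            public void OnReceive(string fromClient, Op op)
            {
                Op applied = syncs[fromClient].TransformIncoming(op);
                // apply `applied` to `document`, then push it through every
                // *other* client's sync, so each client receives a message whose
                // state vector is valid for its own conversation with the server.
                foreach (var kv in syncs)
                    if (kv.Key != fromClient)
                        Send(kv.Key, kv.Value.PrepareOutgoing(applied));
            }

            void Send(string client, Op op) { /* network send */ }
        }

    In the A/B example, the server transforms B's insert against A's already-applied insert before relaying it, and vice versa, so both replicas converge.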

    Read the article

  • Getting SSL to work with Apache/Passenger on OSX

    - by jonnii
    I use Apache/Passenger on my development machine, but need to add SSL support (something which isn't exposed through the control panel). I've done this before in production, but for some reason I can't seem to get it to work on OSX. The steps I've followed so far, from a default Apache OSX install: install Passenger and the Passenger preference pane; add my Rails app (this works); create my ca.key, server.crt and server.key as detailed on the Apple website. At this point I need to start editing the Apache configs, so I added:

        # Apache knows to listen on port 443 for ssl requests.
        Listen 443
        Listen 80

    I thought I'd try editing the config generated by the Passenger pref pane first to get everything working. It starts off looking like this:

        <VirtualHost *:80>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <Directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I then append this:

        <VirtualHost *:443>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <Directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </Directory>

            # SSL Configuration
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
            SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

            # Self-signed certificates
            SSLCertificateFile /private/etc/apache2/ssl.key/server.crt
            SSLCertificateKeyFile /private/etc/apache2/ssl.key/server.key
            SSLCertificateChainFile /private/etc/apache2/ssl.key/ca.crt

            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
        </VirtualHost>

    The files referenced all exist (I double-checked that), but now when I restart Apache I can't even get to myapp.local. However, Apache can still serve the default page when I click on it in the sharing preference panel. Any help would be greatly appreciated.

    Read the article

  • How to properly combine two XAML files in Microsoft Blend?

    - by MartyIX
    Hello, I have a test project with the file MainWindow.xaml with the content:

        <Window x:Class="MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
            xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
            xmlns:view="clr-namespace:Sokoban.View;assembly=Solvers"
            Title="Window1" Height="300" Width="300" Loaded="Window_Loaded">
            <ad:DockingManager x:Name="dockingManager">
                <ad:ResizingPanel Orientation="Vertical">
                    <view:Solvers x:Name="solvers" diag:PresentationTraceSources.TraceLevel="High" />
                    <!-- THE BLOCK BELOW DEMONSTRATES WORKING CODE INSTEAD OF THE LINE ABOVE -->
                    <!--<ad:DocumentPane Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
                        <ad:DockableContent x:Name="classesContent" Title="Classes">
                            <TextBlock>test</TextBlock>
                        </ad:DockableContent>
                    </ad:DocumentPane>-->
                </ad:ResizingPanel>
            </ad:DockingManager>
        </Window>

    and in another project I have the file Solvers.xaml:

        <ad:DocumentPane x:Class="Sokoban.View.Solvers"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:ad="clr-namespace:AvalonDock;assembly=AvalonDock"
            xmlns:diag="clr-namespace:System.Diagnostics;assembly=WindowsBase"
            Name="GamesDocumentPane" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
        </ad:DocumentPane>

    When I open my Visual Studio solution in Microsoft Blend 4, I see the error "InvalidOperationException: DocumentPane must be put under a DockingManager!" when I open either MainWindow.xaml or Solvers.xaml. That is all right in Solvers.xaml, because there really is no DockingManager there, but MainWindow.xaml should work, shouldn't it? How to solve the problem? Note: It seems to me that the files are processed separately, and because the file Solvers.xaml contains the error, the MainWindow.xaml file also contains the very same error. Note 2: The XAML files use the AvalonDock library. Is there a way to say that Solvers.xaml is only an extension of another file? Thank you for any help!

    Read the article

  • HTTPS not redirecting to mongrel upstream

    - by kip
    Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "welcome to nginx" page.

        http {
            # passenger_root /opt/passenger-2.2.11;
            # passenger_ruby /usr/bin/ruby1.8;
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;

            upstream mongrel {
                server 00.000.000.000:8000;
                server 00.000.000.000:8001;
            }

            server {
                listen 443;
                server_name domain.com;
                ssl on;
                ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
                ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
                # ssl_session_timeout 5m;
                # ssl_protocols SSLv2 SSLv3 TLSv1;
                # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                # ssl_prefer_server_ciphers on;

                location / {
                    root /current/public/;
                    index index.html index.htm;
                    proxy_set_header X_FORWARDED_PROTO https;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://mongrel;
                }
            }
        }

    Read the article

  • CIL and JVM: little-endian to big-endian in C# and Java

    - by Haythem
    Hello, I am using C# on the client, where I convert double values to byte arrays. I am using Java on the server, with writeDouble and readDouble to convert double values to and from byte arrays. The problem is that the double values written by C# are not the double values that arrive on the Java side. In Java, writeDouble converts the double argument to a long using the doubleToLongBits method and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first; doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 "double format" bit layout. The program on the server is waiting for 64-102-112-0-0-0-0-0 from C# to convert it to 1700.0, but it receives 0-0-0-0-0-144-154-64 after C# converts 1700.0. This is my code in C#:

        class User
        {
            double workingStatus;

            public void persist()
            {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream())
                {
                    using (BinaryWriter bw = new BinaryWriter(ms))
                    {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++)
                    {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus
            {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test
        {
            static void Main()
            {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
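
    The mismatch is byte order: BinaryWriter.Write(double) emits the 8 bytes little-endian on x86, while Java's readDouble expects them high byte first; the bytes C# produces for 1700.0 (0-0-0-0-0-144-154-64) are the big-endian sequence 64-154-144-0-0-0-0-0 reversed. A small sketch of the usual fix:

        using System;
        using System.Net;

        static class BigEndian
        {
            // Swap the raw bit pattern before writing so Java's readDouble()
            // sees the bytes high byte first, just like writeDouble produces.
            public static byte[] FromDouble(double value)
            {
                long bits = BitConverter.DoubleToInt64Bits(value);    // like doubleToLongBits
                long swapped = IPAddress.HostToNetworkOrder(bits);    // little- to big-endian
                return BitConverter.GetBytes(swapped);
            }
        }

    On a little-endian (x86) host, BigEndian.FromDouble(1700.0) yields 64-154-144-0-0-0-0-0, which is the order Java's readDouble() expects.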

    Read the article
