Search Results

Search found 14861 results on 595 pages for 'high speed computing'.

Page 518 of 595

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file, and the log file is on the same volume. My understanding is that separate data files should be spread across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't have the same disk contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So what should I do to improve parallelism? My reading of various Microsoft publications is that if I increase the number of data files, separate threads can operate on each file independently. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • Sending Email through Gmail

    - by persistence
    I am writing a program that sends email through Gmail, but I keep getting a serious operation timeout error. What is the likely cause?

        class Mailer
        {
            MailMessage ms;
            SmtpClient Sc;

            public Mailer()
            {
                Sc = new SmtpClient("smtp.gmail.com");
                //Sc.Credentials = CredentialCache.DefaultNetworkCredentials;
                Sc.EnableSsl = true;
                Sc.Port = 465;
                Sc.Timeout = 900000000;
                Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
                Sc.UseDefaultCredentials = false;
                Sc.Credentials = new NetworkCredential("uid", "mypss");
            }

            public void MailTodaysBirthdays(List<Celebrant> TodaysCelebrant)
            {
                int i = TodaysCelebrant.Count();
                foreach (Celebrant cs in TodaysCelebrant)
                {
                    //if (IsEmail(cs.EmailAddress.ToString().Trim()))
                    //{
                    ms = new MailMessage();
                    ms.To.Add(cs.EmailAddress);
                    ms.From = new MailAddress("uid", "Developers", System.Text.Encoding.UTF8);
                    ms.Subject = "Happy Birthday ";
                    String EmailBody = "Happy Birthday " + cs.FirstName;
                    ms.Body = EmailBody;
                    ms.Priority = MailPriority.High;
                    try
                    {
                        Sc.Send(ms);
                    }
                    catch (Exception ex)
                    {
                        Sc.Send(ms);
                        BirthdayServices.LogEvent(ex.Message.ToString(), EventLogEntryType.Error);
                    }
                    //}
                }
            }
        }
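
    A likely culprit, not stated in the post, is the port: System.Net.Mail.SmtpClient only speaks STARTTLS and does not support the implicit-SSL service Gmail runs on port 465, so connections to 465 typically hang until the timeout fires. A minimal sketch of the configuration that usually works (credentials are placeholders; namespaces are System.Net and System.Net.Mail):

        // Sketch only: assumes Gmail's STARTTLS submission port (587).
        var client = new SmtpClient("smtp.gmail.com")
        {
            Port = 587,                        // 465 is implicit SSL, which SmtpClient does not support
            EnableSsl = true,                  // upgrade the connection via STARTTLS
            DeliveryMethod = SmtpDeliveryMethod.Network,
            UseDefaultCredentials = false,
            Credentials = new NetworkCredential("user@gmail.com", "password")
        };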

    Read the article

  • AI: Determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs and whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine which features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
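
    As a purely illustrative baseline for the a posteriori step (not something from the original system), one cheap heuristic is to score each candidate site by how evenly it splits the currently plausible users (exploitation), plus a bonus for sites that have rarely been tested (exploration), and then query the top slice that fits in the budget. All names and constants below are assumptions:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class SiteChooser
        {
            // Score candidate sites and return the best `budget` of them to query next.
            // hitRate and timesTested stand in for the cheap SQL lookups mentioned above.
            public static IEnumerable<int> PickSites(
                IList<int> candidateSites,
                IList<int> plausibleUsers,
                Func<int, int, double> hitRate,   // P(hit on site | user), from history
                Func<int, int> timesTested,       // how often this site has been queried before
                int budget)
            {
                return candidateSites
                    .Select(site =>
                    {
                        double p = plausibleUsers.Average(user => hitRate(site, user));
                        double split = p * (1 - p);                     // exploitation: best near p = 0.5
                        double novelty = 1.0 / (1 + timesTested(site)); // exploration bonus for untested sites
                        return (site, score: split + 0.1 * novelty);    // 0.1 is an arbitrary trade-off knob
                    })
                    .OrderByDescending(x => x.score)
                    .Take(budget)
                    .Select(x => x.site);
            }
        }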

    Read the article

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, read me out. The other question has a (most likely) incorrect accepted answer. I do not know how .NET generates its GUIDs; probably only Microsoft does, but there's a high chance it simply calls CoCreateGuid(). That function, however, is documented to be calling UuidCreate(). And the algorithms for creating a UUID are pretty well documented. Long story short, be that as it may, it seems that System.Guid.NewGuid() indeed uses the version 4 UUID generation algorithm, because all the GUIDs it generates match the criteria (see for yourself; I tried a couple million GUIDs and they all matched). In other words, these GUIDs are almost random, except for a few known bits. This then again raises the question: how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How often is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?
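
    As an aside, the "see for yourself" check is easy to reproduce from the canonical string form: in a version 4 UUID the 13th hex digit is always 4 and the 17th is one of 8, 9, a or b. A small sketch:

        // Sketch: confirm Guid.NewGuid() output matches the RFC 4122 version 4 layout.
        using System;

        class GuidLayoutCheck
        {
            static void Main()
            {
                for (int i = 0; i < 1000000; i++)
                {
                    string hex = Guid.NewGuid().ToString("N");   // 32 hex digits, no dashes
                    if (hex[12] != '4' || "89ab".IndexOf(hex[16]) < 0)
                        throw new Exception("Not a version 4 UUID: " + hex);
                }
                Console.WriteLine("All generated GUIDs match the version 4 bit pattern.");
            }
        }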

    Read the article

  • External API function calls AS3 control timeline

    - by giles
    I have a problem with the code below: the embedded Flash movieclip disappears, or completely prevents the scrollto.js query from functioning, in DW CS3. Communication between Flash and JavaScript works without problems; it is the callback I can't get to work, and more frustratingly it should be simple, as no values are required. So far this has been hours of scouring the net without a workable end in sight... ahrr. What function do I need for this to work?

    JavaScript - to call the Flash event from an HTML button link, placed between the head tags (menu_up is the string):

        function callExternalInterface() {
            var flashMovie = window.document.menu;
            flashMovie.menu_up(value);
        }

    Does anyone know of a workable function for the callback?

    HTML:

        <div id="btn_up"><a href="#top" name="charDev" id="charDev" onclick="">top</a></div>

    This is the pane navigation div that uses the Scrollto.js query, and it's this link I need calling back to the embedded "menubtns.swf" (nested in "AS3Menu_javascript.swf") to play 5 frames of that movieclip, via a JS function.

    Embedded .swf code, using swfobject.js with allowScriptAccess=always:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="menu"
                width="251" height="251" id="menu">
          <param name="movie" value="../~Assets/Flash/AS3Menu_javascript.swf" />
          <param name="allowScriptAccess" value="always" />
          <param name="movie" value="ExternalInterfaceScript.swf" />
          <param name="quality" value="high" />
          <object type="application/x-shockwave-flash" data="../~Assets/Flash/AS3Menu_javascript.swf"
                  width="250" height="250">
            <p>Alternative content</p>
          </object>
        </object>

    AS3 / Flash:

        import flash.external.ExternalInterface;
        flash.system.Security.allowDomain(/sourceDomain/);

        ExternalInterface.addCallBack("menu_up", this, resetmenu);
        function resetmenu() {
            gotoAndPlay:("frame label" / "number")
        }

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not. Each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!). So what are the primitive operators? Which ones must be implemented directly in the runtime environment so that all the others can be built from them? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.) Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
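
    To make the "minimal primitive set" idea concrete, here is a hedged sketch of a toy inner interpreter (written in C# purely for illustration; the opcode names and memory layout are my own assumptions, not a canonical Forth). With even this tiny set, DUP can be composed by storing the top of stack into a scratch cell and fetching it back twice, which is the point the question is circling:

        using System;
        using System.Collections.Generic;

        enum Op { Lit, Fetch, Store, Add, Nand, ZeroBranch, Exit }

        class MiniForthVm
        {
            readonly int[] mem = new int[4096];           // cell-addressed memory
            readonly Stack<int> data = new Stack<int>();  // data stack

            // Runs a flat sequence of primitives; the return stack, colon definitions
            // and the outer (text) interpreter are deliberately left out.
            public void Run((Op op, int arg)[] code)
            {
                for (int ip = 0; ip < code.Length; ip++)
                {
                    var (op, arg) = code[ip];
                    switch (op)
                    {
                        case Op.Lit:        data.Push(arg); break;                              // push a literal
                        case Op.Fetch:      data.Push(mem[data.Pop()]); break;                  // @  ( addr -- x )
                        case Op.Store:      { int a = data.Pop(); mem[a] = data.Pop(); } break; // !  ( x addr -- )
                        case Op.Add:        data.Push(data.Pop() + data.Pop()); break;          // +
                        case Op.Nand:       data.Push(~(data.Pop() & data.Pop())); break;       // all logic via NAND
                        case Op.ZeroBranch: if (data.Pop() == 0) ip = arg - 1; break;           // 0BRANCH
                        case Op.Exit:       return;
                    }
                }
            }
        }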

    Read the article

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap onto an HDC in GDI, or a RenderTarget in Direct2D, using those alpha channels respectively. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242). I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster since they are in kernel space). The first idea that comes to my mind is to split the source bitmap into three red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the three output bitmaps into the HDC. But I don't see a way to do the splitting and composition natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing may require many additional copies; the performance overhead would be too high. Or maybe do the alpha blending in 3 passes, each pass applying the blending for one channel while leaving the other two unchanged. Thanks for any comment. EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why doesn't Microsoft improve their API?
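
    For reference, the per-channel arithmetic the example numbers assume is the usual "source over" blend applied independently to R, G and B. A minimal CPU-side sketch (plain math, not a GDI or Direct2D call):

        using System;

        class PerChannelBlend
        {
            // out = (src * alpha + dst * (255 - alpha)) / 255, applied to each channel independently.
            static byte Blend(byte src, byte dst, byte alpha)
            {
                return (byte)((src * alpha + dst * (255 - alpha)) / 255);
            }

            static void Main()
            {
                byte[] src   = { 192, 192, 192 };   // bitmap pixel
                byte[] dst   = { 0, 255, 255 };     // HDC pixel
                byte[] alpha = { 30, 40, 50 };      // per-component alpha

                for (int c = 0; c < 3; c++)
                    Console.Write(Blend(src[c], dst[c], alpha[c]) + " ");   // prints 22 245 242
            }
        }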

    Read the article

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it with PVCS, VSS and Subversion (Subversion keyword substitution), however I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

    Read the article

  • What programming languages do the top tier Universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past: "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link) If the languages being taught by the schools are considered such a contributing factor to the quality of the school's program, then I'm curious what languages the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.)? If the average school is performing so poorly due in large part to the languages (or lack of them) that they teach, then what languages do the supposed "good" CS programs teach that differentiate them? If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework. Edit: Shog-9 asks why I don't get this information directly from the schools' websites themselves. I would, but many schools' websites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So, we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.

    Read the article

  • Need help supporting multiple screen resolutions on Android

    - by michael
    Hi, in my Android application I would like to support multiple screens, so I have my layout XML files in res/layout (the layouts are the same across different screen resolutions), and I place my high-resolution assets in res/drawable-hdpi. In my layout XML I have:

        <?xml version="1.0" encoding="utf-8"?>
        <TableLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@+id/table"
            android:background="@drawable/bkg">

    I have put bkg.png in res/drawable-hdpi, and I have started my emulator with WVGA-800 as the AVD. But my application crashes:

        E/AndroidRuntime( 347): Caused by: android.content.res.Resources$NotFoundException: Resource is not a Drawable (color or path): TypedValue{t=0x1/d=0x7f020023 a=-1 r=0x7f020023}
        E/AndroidRuntime( 347):     at android.content.res.Resources.loadDrawable(Resources.java:1677)
        E/AndroidRuntime( 347):     at android.content.res.TypedArray.getDrawable(TypedArray.java:548)
        E/AndroidRuntime( 347):     at android.view.View.<init>(View.java:1850)
        E/AndroidRuntime( 347):     at android.view.View.<init>(View.java:1799)
        E/AndroidRuntime( 347):     at android.view.ViewGroup.<init>(ViewGroup.java:284)
        E/AndroidRuntime( 347):     at android.widget.LinearLayout.<init>(LinearLayout.java:92)
        E/AndroidRuntime( 347):     ... 42 more

    Does anyone know how to fix my problem? Thank you.

    Read the article

  • Zend PHP memory_limit

    - by RepDetec
    All, I am working on a Zend Framework based web application. We keep encountering out of memory errors on our dev server: Allowed memory size of XXXX bytes exhausted (tried YYYY... We keep increasing memory_limit in php.ini, but it is now up over 1000 megs. What is a normal memory_limit value? What are the usual suspects in PHP/Zend for running out of memory? We are using the Propel ORM. Thanks for all of the help!

    Update: I cannot reproduce this error in my Windows environment. If I set memory_limit low (say 16M), I get the same error, but the "tried to allocate" amount is always something reasonable, for example: (tried to allocate 13344 bytes). If I set the memory very low on the (Fedora 9) server (such as 16M), I get the same thing: consistent, reasonable out of memory errors. However, even when the memory limit is set very high on our server (128M, for example), maybe once a week I will get a crazy huge memory error: (tried to allocate 1846026201 bytes). I don't know if that might shed any more light onto what is going on. We are using Propel 1.5. It sounds like the actual release is going to come out later this month, but it doesn't look like anyone else is having this problem with it anyway. I don't know that Propel is the problem. We are using Zend Server with PHP 5.2 on the Linux box, and 5.3 locally. Any more ideas? I have a ticket out to get Xdebug installed on the Linux box. Thanks, -rep

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency) as it will be consumed in different environments - nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
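
    A hedged sketch of what the "DomainMembershipProvider" split might look like; the interface and class names here are hypothetical, not from the original design:

        // Domain layer: no reference to System.Web anywhere.
        public interface IMembershipRepository
        {
            bool ValidateUser(string userName, string password);
            void CreateUser(string userName, string password, string email);
        }

        // Infrastructure layer: the only project that references System.Web.Security.
        public class AspNetMembershipRepository : IMembershipRepository
        {
            public bool ValidateUser(string userName, string password)
            {
                return System.Web.Security.Membership.ValidateUser(userName, password);
            }

            public void CreateUser(string userName, string password, string email)
            {
                System.Web.Security.Membership.CreateUser(userName, password, email);
            }
        }

    The domain model and websites would depend only on IMembershipRepository, so swapping the ASP.NET provider for another store later means writing a new adapter rather than touching business logic.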

    Read the article

  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute force routines up to run in parallel on multi-processor architectures. To do this I used the well documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute force routines. The performance gains on my workstation were very pleasing. I am running Windows XP on the following hardware: Intel Core 2 Quad CPU 2.33 GHz, 3.49 GB RAM. Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threading version of the algorithm to our higher spec UAT server. Here is the spec of our UAT server: Windows 2003 Server R2 Enterprise x64, 8 CPU (quad-core) AMD Opteron 2.70 GHz, 255 GB RAM. After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high spec W2003 server than on my local XP workstation! In fact the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads). The algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on an 8-CPU (quad-core) machine than on my Core 2 Quad workstation? Why are we seeing no performance gains with the multi-threading on the W2003 server whilst the XP workstation tests show gains of up to 40%? Any help or pointers would be appreciated. Regards, Myles
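
    For readers who have not seen it, the producer/consumer shape described above looks roughly like this (a Monitor-based sketch in the spirit of the Albahari article; the WorkQueue name and Action-based work items are placeholders, not the original code):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class WorkQueue
        {
            readonly Queue<Action> queue = new Queue<Action>();
            readonly List<Thread> workers = new List<Thread>();
            readonly object gate = new object();
            bool closed;

            public WorkQueue(int workerCount)   // e.g. Environment.ProcessorCount
            {
                for (int i = 0; i < workerCount; i++)
                {
                    var t = new Thread(Consume) { IsBackground = true };
                    workers.Add(t);
                    t.Start();
                }
            }

            public void Enqueue(Action work)
            {
                lock (gate) { queue.Enqueue(work); Monitor.Pulse(gate); }
            }

            public void CloseAndJoin()
            {
                lock (gate) { closed = true; Monitor.PulseAll(gate); }
                foreach (var t in workers) t.Join();
            }

            void Consume()
            {
                while (true)
                {
                    Action work;
                    lock (gate)
                    {
                        while (queue.Count == 0)
                        {
                            if (closed) return;     // drained and shut down
                            Monitor.Wait(gate);
                        }
                        work = queue.Dequeue();
                    }
                    work();   // run the brute-force chunk outside the lock
                }
            }
        }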

    Read the article

  • Passing string with (accidental) escape character loses character even though it's a raw string

    - by Steen
    I have a function with a python doctest that fails because one of the test input strings has a backslash that's treated like an escape character even though I've encoded the string as a raw string. My doctest looks like this:

        >>> infile = [ "Todo: fix me", "/** todo: fix", "* me", "*/", r"""//\todo stuff to fix""", "TODO fix me too", "toDo bug 4663" ]
        >>> find_todos( infile )
        ['fix me', 'fix', 'stuff to fix', 'fix me too', 'bug 4663']

    And the function, which is intended to extract the todo texts from a single line following some variation over a todo specification, looks like this:

        todos = list()
        for line in infile:
            print line
            if todo_match_obj.search( line ):
                todos.append( todo_match_obj.search( line ).group( 'todo' ) )

    And the regular expression called todo_match_obj is:

        r"""(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)"""

    A quick conversation with my ipython shell gives me:

        In [35]: print "//\todo"
        //      odo

        In [36]: print r"""//\todo"""
        //\todo

    And, just in case the doctest implementation uses stdout (I haven't checked, sorry):

        In [37]: sys.stdout.write( r"""//\todo""" )
        //\todo

    My regex-foo is not high by any standards, and I realize that I could be missing something here. EDIT: Following Alex Martellis answer, I would like suggestions on what regular expression would actually match the blasted r"""//\todo stuff to fix""". I know that I did not originally ask for someone to do my homework, and I will accept Alex's answer as it really did answer my question (or confirm my fears). But I promise to upvote any good solutions to my problem here :) I'm using Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15). Thank you for reading this far (If you skipped directly down here, I understand)

    Read the article

  • Should programmers do Pro Bono work? Where are the code public defenders?

    - by Tj Kellie
    How many projects are people doing based on the pro bono publico ideal versus working for the highest wage or the potential for a cash-in-buy-out payday? For years lawyers have been called out for excessive gathering of wealth from high bill rates and huge settlement deals, hiring out their knowledge and skills to the highest bidders. People call for them to do more for free, to use the laws and their time to defend or further some cause that's in the public's best interest. Is professional software development that different? So many bright people and so much knowledge of complex systems. Do you think that there is enough of a "pro bono" movement to solve the social and public problems in the industry right now? If so, what are the examples to point to? OLPC? NOTE: Saying that open source software is the same as pro bono misses the point completely. I was looking for specific projects with a social context, not just group-sourcing for free software. Just because you're not making anyone pay for your software does not mean it's doing anyone any good. I'm not calling for mandatory enforcement of pro bono work for programmers; really I just want some objective opinions and concrete examples of socially-minded software/tech development projects like the One Laptop Per Child project. I'm sure open source would be a natural tie-in for some.

    Read the article

  • How can I coordinate code review tool and RCS (specifically git)

    - by Chris Nelson
    We're committed to git for code management. We're trying to find a tool that will help us systematize code reviews. We're considering Gerrit and Code Collaborator but would welcome other suggestions. We're having a problem answering the question, "How do we know every commit was reviewed?" (Or "What commits have yet to be reviewed?") One answer would be to submit every commit or every push for review and track incomplete reviews in the review tool. I'm not entirely happy with relying on another tool -- especially if it's not open source -- to tell us this. What seems to be a better answer is to rely on sign-offs in git (e.g., "Signed-off-by: Chris Nelson") and use a hook in the review tool to sign off commits on behalf of the reviewer. An advantage of this is that if we use some other review mechanism for some commits, we have just one place to look for results. One problem with this is that we can't require review before push, because the review tool is unlikely to have access to the developer's private repository clone to add the sign-off. Any ideas on integrating code review with code management to achieve ease of use and high visibility of unreviewed changes?

    Read the article

  • How to organize the work when a project needs to be re-implemented due to poor code quality?

    - by Dmitriy Nagirnyak
    Hi, I have joined a very small team where one main developer has been building the web app (.NET 4.0) for about 6 months. The project should be delivered within the next 2 months. After a first look at the code I can say that I would never allow it to go to production (things like catch { }, no tests at all, WebForms, etc.), so the code quality is incredibly low. My task is to improve that and still deliver the solution. So I plan to start with unit testing and reimplementing most of the functionality in MVC2 (though reusing some of the existing code). I estimate that I will need about 6 weeks to catch up with the current progress and be on the same functionality level as the application will be in 6 months. The problem is that the main developer who has been working on the project does not seem to be very 'professional' or skillful (he seems to be really just starting in IT and many basic things are unknown to him). It will take a significant amount of time and effort to educate him in how to do proper testing and development and apply some patterns. I am ready to take responsibility for reimplementing the application, but at the same time I don't want the main developer to be idle; as he won't be able to significantly contribute to the better-world project at this stage, I am not sure what would be the best way to keep productivity high for both of us. Currently I think the following solution is good enough: he proceeds doing what he does until I catch up with him, and then we start working on a new project together. The problem is that of course this approach is not very productive, as one developer will work on the better-world project while the other proceeds with what he did, effectively doing similar tasks. Can you suggest how we could better organise the work together in order to be most efficient for the overall project? Thanks, Dmitriy.

    Read the article

  • Should I store generated code in source control

    - by Ron Harlev
    This is a debate I'm taking part in. I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important for the question). The generation is set as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now some people are claiming that having these classes saved in source control could cause confusion, in case the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments? UPDATE: I asked this question since I really believed there is one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below could provide a very good guideline to the types of questions you should be asking yourself when having to decide on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • Getting SSL to work with Apache/Passenger on OSX

    - by jonnii
    I use Apache/Passenger on my development machine, but need to add SSL support (something which isn't exposed through the control panel). I've done this before in production, but for some reason I can't seem to get it to work on OS X. The steps I've followed so far, starting from a default Apache OS X install:

    1. Install Passenger and the Passenger preference pane.
    2. Add my Rails app (this works).
    3. Create my ca.key, server.crt and server.key as detailed on the Apple website.

    At this point I need to start editing the Apache configs, so I added:

        # Apache knows to listen on port 443 for ssl requests.
        Listen 443
        Listen 80

    I thought I'd try editing the Passenger pref pane generated config first to get everything working. It starts off looking like this:

        <VirtualHost *:80>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <Directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    I then append this:

        <VirtualHost *:443>
            ServerName myapp.local
            DocumentRoot "/Users/jonnii/programming/ruby/myapp/public"
            RailsEnv development
            <directory "/Users/jonnii/programming/ruby/myapp/public">
                Order allow,deny
                Allow from all
            </directory>

            # SSL Configuration
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
            SSLOptions +FakeBasicAuth +ExportCertData +StdEnvVars +StrictRequire

            # Self signed certificates
            SSLCertificateFile /private/etc/apache2/ssl.key/server.crt
            SSLCertificateKeyFile /private/etc/apache2/ssl.key/server.key
            SSLCertificateChainFile /private/etc/apache2/ssl.key/ca.crt

            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
        </VirtualHost>

    The files referenced all exist (I double checked that), but now when I restart Apache I can't even get to myapp.local. However, Apache can still serve the default page when I click on it in the Sharing preference panel. Any help would be greatly appreciated.

    Read the article

  • How does real-time collaboration with multiple clients work in a system using operation transformation?

    - by Saikat Chakrabarti
    I just finished reading High-Latency, Low-Bandwidth Windowing in the Jupiter Collaboration System, and I mostly followed everything until part 6: global consistency. This part describes how the system described in the paper can be extended to accommodate multiple clients connected to the server. However, the explanation is very short and essentially says the system will work if the central server merely forwards client messages to all the other clients. I don't really understand how this works, though. What state vector would be sent in the message that is sent to all the other clients? Does the server maintain separate state vectors for each client? Does it maintain a separate copy of the widgets locally for each client? The simple example I can think of is this setup: imagine client A, the server, and client B, with client A and client B both connected to the server. To start, all three have the state object "ABCD". Then client A sends the message "insert character F at position 0" at the same time client B sends the message "insert character G at position 0" to the server. It seems like simply relaying client A's message to client B and vice versa doesn't actually handle this case. So what exactly does the server do?

    Read the article

  • CIL and JVM: little endian to big endian in C# and Java

    - by Haythem
    Hello, I am using C# on the client, where I am converting double values to byte arrays. I am using Java on the server, where I use writeDouble and readDouble to convert double values to and from byte arrays. The problem is that the double values read back in Java at the end are not the double values given to C# at the beginning.

    writeDouble in Java: converts the double argument to a long using the doubleToLongBits method, and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first.

    doubleToLongBits: returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout.

    The program on the server is expecting 64-154-144-0-0-0-0-0 from C# in order to convert it to 1700.0, but it is receiving 0-0-0-0-0-144-154-64 after C# has converted 1700.0. This is my code in C#:

        class User
        {
            double workingStatus;

            public void persist()
            {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream())
                {
                    using (BinaryWriter bw = new BinaryWriter(ms))
                    {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++)
                    {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus
            {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test
        {
            static void Main()
            {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
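
    For what it's worth, the mismatch described above is exactly a byte-order reversal: BinaryWriter and BitConverter produce the double little-endian on x86, while Java's readDouble expects the bytes high byte first. A minimal sketch of the usual fix on the C# side (1700.0 is just the value from the post):

        // Sketch: produce big-endian bytes for Java's readDouble (namespace System).
        byte[] bytes = BitConverter.GetBytes(1700.0);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(bytes);
        // bytes now reads 64, 154, 144, 0, 0, 0, 0, 0 -- the order readDouble expects.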

    Read the article

  • https not redirecting to mongrel upstream

    - by kip
    Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "Welcome to nginx" page.

        http {
            # passenger_root /opt/passenger-2.2.11;
            # passenger_ruby /usr/bin/ruby1.8;
            include mime.types;
            default_type application/octet-stream;

            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;

            upstream mongrel {
                server 00.000.000.000:8000;
                server 00.000.000.000:8001;
            }

            server {
                listen 443;
                server_name domain.com;

                ssl on;
                ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
                ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
                # ssl_session_timeout 5m;
                # ssl_protocols SSLv2 SSLv3 TLSv1;
                # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                # ssl_prefer_server_ciphers on;

                location / {
                    root /current/public/;
                    index index.html index.htm;
                    proxy_set_header X_FORWARDED_PROTO https;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://mongrel;
                }
            }
        }

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row and each column containing various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, and I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy and done and externally determined and handed down from on high. Thanks in advance.

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional "audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field), inserted into by triggers" setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough for me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • Understanding OOP Principles in passing around objects/values

    - by Hans
    I'm not quite grokking a couple of things in OOP and I'm going to use a fictional understanding of SO to see if I can get help understand. So, on this page we have a question. You can comment on the question. There are also answers. You can comment on the answers.

        Question
          - comment
          - comment
          - comment
        Answer
          - comment
        Answer
          - comment
          - comment
          - comment
        Answer
          - comment
          - comment

    So, I'm imagining a very high level understanding of this type of system (in PHP, not .Net as I am not yet familiar with .Net) would be like:

        $question = new Question;
        $question->load($this_question_id); // from the URL probably
        echo $question->getTitle();

    To load the answers, I imagine it's something like this ("A"):

        $answers = new Answers;
        $answers->loadFromQuestion($question->getID()); // or $answers->loadFromQuestion($this_question_id);
        while($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    Or, would you do ("B"):

        $answers->setQuestion($question);  // inject the whole obj, so we have access to all the data and public methods in $question
        $answers->loadFromQuestion();      // the ID would be found via $this->question->getID() instead of from the argument passed in
        while($answer = $answers->getAnswer()) {
            echo $answer->showFormatted();
        }

    I guess my problem is, I don't know when or if I should be passing in an entire object, and when I should just be passing in a value. Passing in the entire object gives me a lot of flexibility, but it's more memory and subject to change, I'd guess (like a property or method rename). If "A" style is better, why not just use a function? OOP seems pointless here. Thanks, Hans

    Read the article
