Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

  • a question on webpage data scraping using Java

    - by Gemma
    Hi there. I am trying to implement a simple HTML webpage scraper in Java, and I have a small problem. Suppose I have the following HTML fragment: <div id="sr-h-left" class="sr-comp"> <a class="link-gray-underline" id="compare_header" rel="nofollow" href="javascript:i18nCompareProd('/serv/main/buyer/ProductCompare.jsp?nxtg=41980a1c051f-0942A6ADCF43B802'); " Compare Showing 1 - 30 of 1,439 matches. The data I am interested in is the integer 1,439 shown at the end. I am just wondering how I can get that integer out of the HTML. I am considering using a regular expression with java.util.regex.Pattern to pull the data out, but I am still not very clear about the process. I would be grateful for any hint or idea on this data scraping. Thanks a lot.
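
    A minimal sketch of the regex approach the question describes, assuming the total always appears in a phrase of the form "of 1,439 matches"; the class and method names here are made up for illustration:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class MatchCountExtractor {

            // Pulls the total out of text like "Showing 1 - 30 of 1,439 matches".
            static int extractTotal(String html) {
                Pattern p = Pattern.compile("of\\s+([\\d,]+)\\s+matches");
                Matcher m = p.matcher(html);
                if (!m.find()) {
                    throw new IllegalArgumentException("count not found");
                }
                // Strip the thousands separator before parsing.
                return Integer.parseInt(m.group(1).replace(",", ""));
            }

            public static void main(String[] args) {
                System.out.println(extractTotal("Compare Showing 1 - 30 of 1,439 matches")); // prints 1439
            }
        }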

  • a4j:commandButton causes full page reload on IE7

    - by Greg Charles
    Our process allows users to activate their account, and then configure e-mail preferences. We're using the tag: <a4j:commandButton id="activate" action="#{controller.agreeAction}" image="/img/ok.png" styleClass="activate-button" reRender="mainContent, sideBar" oncomplete="showEmailDialog();" /> This works fine on Firefox, but on IE, showEmailDialog() fires to display the new dialog, and then the full page reloads, which instantly hides it again. I put in numerous alert() calls to make sure of what was happening. I see the e-mail dialog until I clear the final alert box in the showEmailDialog() script, and then I see the alerts that I put into jQuery(document).ready(). Why does IE do a full page reload instead of just refreshing the requested sections?

  • Speech recognition with Flash or Silverlight

    - by Sebastián Grignoli
    I'm developing a web user interface to enter some information that is not very complex but needs to be loaded in real time. I think the application could make use of speech recognition to facilitate the task. The core of the interface is being built with JavaScript and jQuery, but it can easily include a Flash or Silverlight component. I believe that's probably the way to go... I don't need to recognize everything the user says, only a few prerecorded commands. Also, I don't want the user to click a button to mark the beginning and end of the spoken command; it should be detected live. Is there anything that does this? I would be grateful if anyone could tell me about a complete solution, free or commercial, as well as any advice on capturing a sound stream from the mic and processing it with Flash or Silverlight. Sebastian.-

  • Taming the malloc/free beast -- tips & tricks

    - by roufamatic
    I've been using C on some projects for a master's degree but have never built production software with it. (.NET & Javascript are my bread and butter.) Obviously, the need to free() memory that you malloc() is critical in C. This is fine, well and good if you can do both in one routine. But as programs grow, and structs deepen, keeping track of what's been malloc'd where and what's appropriate to free gets harder and harder. I've looked around on the interwebs and only found a few generic recommendations for this. What I suspect is that some of you long-time C coders have come up with your own patterns and practices to simplify this process and keep the evil in front of you. So: how do you recommend structuring your C programs to keep dynamic allocations from becoming memory leaks?
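
    One convention that comes up a lot (a sketch, not a silver bullet): pair every allocating "create" function with a matching "destroy" function, so each struct has exactly one owner and one place where it is freed. The struct below is invented purely for illustration.

        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical example type. */
        typedef struct person {
            char *name;
            int   age;
        } person;

        /* All allocation for a person happens here... */
        person *person_create(const char *name, int age) {
            person *p = malloc(sizeof *p);
            if (!p) return NULL;
            p->name = strdup(name);          /* POSIX; copy so the struct owns the string */
            if (!p->name) { free(p); return NULL; }
            p->age = age;
            return p;
        }

        /* ...and all deallocation happens in its mirror image. */
        void person_destroy(person *p) {
            if (!p) return;
            free(p->name);
            free(p);
        }

    Whoever calls person_create() is responsible for the matching person_destroy(); nested structs repeat the same pattern, so ownership stays unambiguous as the program grows.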

  • Core dump utility for .NET

    - by Dave
    In my past life as a COBOL mainframe developer I made extensive use of a tool called Abendaid which, in the event of an exception, would give me a complete memory dump including a formatted list of every variable in memory as well as a complete stack trace of the program with the offending statement highlighted. This made pinpointing the cause of an error much simpler and saved a lot of step-through debugging and/or trace statements. Now I've made the transition to C# and .NET web development I find that the information provided by ASP.NET only tells half the story, giving me a stack trace, but not any of the variable or class information. This makes debugging more difficult as you then have to run the process again with the debugger to try and reproduce the error, not easy with intermittent errors or with assemblies that run under the likes of SQL Server or CRM. I've looked around quite a lot for something that does this but I can't find anything obvious. Does anyone have any idea if there is one, or if not, what I'd need to start with in order to write one?

  • Django multiple generic_inline_formset in a view

    - by Can Burak Cilingir
    We have a bunch of formsets:

        EmailAddressInlineFormSet = generic_inlineformset_factory(
            EmailAddress, extra=1, exclude=["created_by", "last_modified_by"])
        emailaddressformset = EmailAddressInlineFormSet(
            instance=person, prefix="emailaddress")
        # [ more definitions ]

    and, in the view, we process them as:

        emailaddressformset = EmailAddressInlineFormSet(
            request.POST, instance=person, prefix="emailaddress")
        # [ more definitions ]

    So, nothing fancy or unordinary. The unfortunate fact is, we have 15 of these formsets, one for email addresses, another for phone numbers, etc., so the view code is ugly and not-so-manageable. What would be the most unhackish way to handle this number of formsets in a single view? In the end - I guess - I'm looking for a class or a functionality like multiple_generic_inline_formset, and I am open to all kinds of suggestions or discussions.
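
    A rough sketch of one way to collapse the repetition, assuming every inline follows the same pattern as the snippet above; the spec list, the second model name, and the build_formsets helper are all made up for illustration:

        # Assumes the models and generic_inlineformset_factory are imported as in
        # the snippets above; PhoneNumber is a placeholder model name.
        FORMSET_SPECS = [
            ("emailaddress", EmailAddress),
            ("phonenumber", PhoneNumber),
            # ... one entry per inline, 15 in total
        ]

        COMMON_KWARGS = dict(extra=1, exclude=["created_by", "last_modified_by"])

        def build_formsets(person, data=None):
            """Return a dict mapping prefix -> formset (bound if data is given)."""
            formsets = {}
            for prefix, model in FORMSET_SPECS:
                FormSet = generic_inlineformset_factory(model, **COMMON_KWARGS)
                formsets[prefix] = FormSet(data, instance=person, prefix=prefix)
            return formsets

        # In the view:
        #   formsets = build_formsets(person)                  # GET
        #   formsets = build_formsets(person, request.POST)    # POST
        #   if all(fs.is_valid() for fs in formsets.values()):
        #       for fs in formsets.values():
        #           fs.save()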

  • Socket ping-pong performance

    - by Kamil_H
    I have written two simple programs (tried it in C++ and C#). This is pseudo code:

        -------- Client ---------------
        for(int i = 0; i < 200000; i++)
        {
            socket_send("ping")
            socket_receive(buff)
        }

        --------- Server -------------
        while(1)
        {
            socket_receive(buff)
            socket_send("pong")
        }

    I tried it on Windows. The execution time of the client is about 45 seconds. Can somebody explain to me why this takes so long? I understand that if there were a real network connection between client and server, the time of one 'ping-pong' would be: generate_ping + send_via_network + generate_pong + send_via_network, but here everything is done in 'local' mode. Is there any way to make this inter-process ping-pong faster using network sockets (I'm not asking about shared memory, for example :) )
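
    One thing usually worth ruling out in this kind of benchmark is Nagle's algorithm, which delays small writes so they can be coalesced; a strict ping/pong pattern is the worst case for it. A minimal C# sketch of the client side with Nagle disabled (the port number is an assumption, and whether this accounts for the full 45 seconds is not certain):

        using System.Net.Sockets;
        using System.Text;

        class PingClient
        {
            static void Main()
            {
                using (var client = new TcpClient("127.0.0.1", 5000))
                {
                    client.NoDelay = true;                    // disable Nagle's algorithm
                    NetworkStream stream = client.GetStream();
                    byte[] ping = Encoding.ASCII.GetBytes("ping");
                    byte[] buff = new byte[16];

                    for (int i = 0; i < 200000; i++)
                    {
                        stream.Write(ping, 0, ping.Length);
                        stream.Read(buff, 0, buff.Length);    // block until "pong" arrives
                    }
                }
            }
        }

    The server side would set NoDelay on its accepted socket the same way. Batching several pings before waiting is the other common lever, but that changes the protocol.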

  • DirectX 9 HLSL vs. DirectX 10 HLSL: syntax the same?

    - by numerical25
    For the past month or so, I have been busting my behind trying to learn DirectX, so I've been going back and forth between DirectX 9 and 10. One of the major changes I've seen between the two is how vertex data is processed on the graphics card - in particular, how you get the GPU to recognize your structs. In DirectX 9, you define a Flexible Vertex Format. Your typical setup would be like this:

        #define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)

    In DirectX 10, I believe the equivalent is the input vertex description:

        D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0},
            {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
        };

    I notice that DirectX 10 is more descriptive. Besides this, what are some of the drastic changes made, and is the HLSL syntax the same for both?

  • what's the purpose of fcntl with parameter F_DUPFD

    - by Daniel
    I traced an Oracle process and found that it first opens the file /etc/netconfig as file descriptor 11, then duplicates it to 256 by calling fcntl with F_DUPFD, and then closes the original descriptor 11. Later it reads using descriptor 256. So what's the point of duplicating the file handle? Why not just work on the original one?

        12931: 0.0006 open("/etc/netconfig", O_RDONLY|O_LARGEFILE) = 11
        12931: 0.0002 fcntl(11, F_DUPFD, 0x00000100) = 256
        12931: 0.0001 close(11) = 0
        12931: 0.0002 read(256, " # p r a g m a i d e n".., 1024) = 1024
        12931: 0.0003 read(256, " t s t p i _ c".., 1024) = 215
        12931: 0.0002 read(256, 0x106957054, 1024) = 0
        12931: 0.0001 lseek(256, 0, SEEK_SET) = 0
        12931: 0.0002 read(256, " # p r a g m a i d e n".., 1024) = 1024
        12931: 0.0003 read(256, " t s t p i _ c".., 1024) = 215
        12931: 0.0003 read(256, 0x106957054, 1024) = 0
        12931: 0.0001 close(256) = 0
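
    For illustration only (it does not answer the "why", just shows the mechanism): F_DUPFD duplicates a descriptor onto the lowest free descriptor number greater than or equal to the third argument, and the copy shares the same open file description (offset, status flags), exactly like dup()/dup2(). A small C sketch reproducing the pattern from the trace:

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/etc/netconfig", O_RDONLY);
            if (fd == -1) { perror("open"); return 1; }

            /* Duplicate onto the lowest free descriptor >= 256. */
            int high = fcntl(fd, F_DUPFD, 256);
            if (high == -1) { perror("fcntl"); return 1; }

            close(fd);                                  /* the low descriptor is free again */

            char buf[64];
            ssize_t n = read(high, buf, sizeof buf);    /* shares the same file offset */
            printf("read %zd bytes via fd %d\n", n, high);
            close(high);
            return 0;
        }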

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I do look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I do care about code quality quite a lot, which makes me a (perceived) slow worker who does write many tests and refines the code quite a lot because of the design deficiencies. Most of the deficiencies I do find by putting my design under stress while checking for invariants. It does also help a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I do get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I do want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers did switch over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework did arrive which did bring many new things like generics and new data structures. The "old" fashioned Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object to type conversion (aka boxing) was no longer necessary.

    I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often followed route is to convert the string to upper case before putting it into the Hashtable.

        Hashtable Table = new Hashtable();

        void Add(string key, string value)
        {
            Table.Add(key.ToUpper(), value);
        }

    This is valid and working code but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary.

        Dictionary<string, string> DictTable =
            new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

        void AddDict(string key, string value)
        {
            DictTable.Add(key, value);
        }

    Many people do not use the other ctors of Dictionary because they do shy away from the overhead of writing their own comparer. They do not know that .NET has for strings already predefined comparers at hand which you can directly use.

    Today in the many core area we do use threads all over the place. Sometimes things break in subtle ways but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual core machine. Threading does make it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.

    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

        int Counter;

        void IJustLearnedToUseThreads()
        {
            var t1 = new Thread(ThreadWorkMethod);
            t1.Start();
            var t2 = new Thread(ThreadWorkMethod);
            t2.Start();
            t1.Join();
            t2.Join();

            if (Counter != 2 * Increments)
                throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
        }

        const int Increments = 10 * 1000 * 1000;

        void ThreadWorkMethod()
        {
            for (int i = 0; i < Increments; i++)
            {
                Counter++;
            }
        }

    It does throw an exception with a message like "Hmm 10.222.287 != 20.000.000" and does never finish. The code does fail because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

        Counter = Counter + 1

    This does involve reading the counter from a memory location into the CPU, incrementing the value on the CPU and writing the new value back to the memory location. When we do look at the generated assembly code we will see only

        inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide the fact how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU does read a value from main memory into the cache, modifies it and writes it back to the main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU does make changes to values which did change in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we did overwrite the already changed value from the other thread. This is a very common case, and people do learn to protect their data with proper locking.

        void Intermediate()
        {
            var time = Stopwatch.StartNew();
            Action acc = ThreadWorkMethod_Intermediate;
            var ar1 = acc.BeginInvoke(null, null);
            var ar2 = acc.BeginInvoke(null, null);
            ar1.AsyncWaitHandle.WaitOne();
            ar2.AsyncWaitHandle.WaitOne();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Intermediate()
        {
            for (int i = 0; i < Increments; i++)
            {
                lock (this)
                {
                    Counter++;
                }
            }
        }

    This is better and does use the .NET Threadpool to get rid of manual thread management. It does give the expected result, but it can result in deadlocks because you do lock on this. This is in general a bad idea since it can lead to deadlocks when other threads use your class instance as lock object. It is therefore recommended to create a private object as lock object to ensure that nobody else can lock your lock object.

    When you read more about threading you will read about lock free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It does make quite weak guarantees in general, but it can still work because your CPU architecture does give you more invariants than the CLR memory model. For a simple counter there is an easy lock free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock free algos since most likely you will fail to get it right on all CPU architectures.

        void Experienced()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
            t1.Wait();
            t2.Wait();

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Experienced()
        {
            for (int i = 0; i < Increments; i++)
            {
                Interlocked.Increment(ref Counter);
            }
        }

    Since time does move forward we do not use threads explicitly anymore but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

        lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler does know very well about CPU intrinsic functions. Atomic operations which do lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, exchange and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency you will fail. Only the generated machine code counts.

    Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how good we are doing currently.

        Level                        Time in s
        IJustLearnedToUseThreads     Flawed Code
        Intermediate                 1.5   (lock)
        Experienced                  0.3   (Interlocked.Increment)
        Master                       0.1   (1.0 for int[2])

    That lock free thing is really a nice thing. But if you read more about CPU cache, cache coherency and false sharing you can do even better.

        int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8 way
                                      // associative cache - try for yourself, e.g. 64 on more modern CPUs

        void Master()
        {
            var time = Stopwatch.StartNew();
            Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
            Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
            t1.Wait();
            t2.Wait();

            Counter = Counters[0] + Counters[Counters.Length - 1];

            if (Counter != 2 * Increments)
                throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

            Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
        }

        void ThreadWorkMethod_Master(object number)
        {
            int index = (int)number;
            for (int i = 0; i < Increments; i++)
            {
                Counters[index]++;
            }
        }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you do access a value from one location, the CPU does not only fetch one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non obvious code which is faster although it does have many more memory reads than another algorithm.

    So what have we done here? We have started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we did try to get fancy and used threads for the first time and failed. Our next try was better, but it still had non obvious issues (lock object exposed to the outside). Knowledge has increased further and we have found a lock free version of our counter, which is nice and clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your used data structures and CPU cache coherency. Although we are working in a virtual execution environment in a high level language with automatic memory management, it does pay off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which does help at the digging deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the Science field, I take pride in discovering non obvious things. This can be a very hard to find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files to a local directory, i.e. the makefile contains:

        tmp/%.txt: tmp
                ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@

    This is not terribly elegant, but appears to work. Is there a better way? Basically, given

        INPUTS = /foo/bar.txt /zot/snarf.txt

    I would like to be able to have e.g.

        %.out: %.txt
                some command

    as well as targets to merge results depending on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints are welcome.

  • Compiling flex file into dll.

    - by Szpilona
    Hi, I've got a lexer created with flex (cygwin). Normally I compile it to an .exe file. For the newest project I need a lexer to use in a bigger C# program running on Windows XP. Of course I can execute a file using System.Diagnostics.Process. But it is not the best solution for me as I want that program to run on several machines. How can I create a dll under cygwin having the source code of the lexer? Thanks in advance... Szpilona

  • Passing Text for command line

    - by Kasun
    Hi, I need to pass some text which is in a rich text box to the command line. This is my button click event, which starts cmd:

        private void button1_Click(object sender, EventArgs e)
        {
            ProcessStartInfo psi = new ProcessStartInfo
            {
                FileName = "cmd",
                Arguments = @"/k ""C:\Program Files\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat""",
            };
            Process.Start(psi);
        }

    My rich text box contains the following text:

        #include <iostream>
        using namespace std;

        int main()
        {
            cout << "Welcome to the wonderful world of C++!!!\n";
            return 0;
        }

    Can anyone provide the necessary code?
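
    A sketch of one way to do it (not the only one), assuming the goal is to compile the rich text box contents with cl.exe: write the text to a temporary .cpp file and hand that file name to cmd together with vcvars32.bat. The control name, the cl option and the cmd quoting convention (outer double quotes around the whole /k payload) are assumptions.

        // Needs: using System.IO; using System.Diagnostics; (inside the existing form class)
        private void button1_Click(object sender, EventArgs e)
        {
            string source = Path.Combine(Path.GetTempPath(), "snippet.cpp");
            File.WriteAllText(source, richTextBox1.Text);   // dump the editor contents to disk

            ProcessStartInfo psi = new ProcessStartInfo
            {
                FileName = "cmd",
                Arguments = "/k \"\"C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\bin\\vcvars32.bat\""
                          + " && cl /EHsc \"" + source + "\"\"",
                WorkingDirectory = Path.GetTempPath(),      // cl writes the .obj/.exe here
                UseShellExecute = false
            };
            Process.Start(psi);
        }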

  • Watching setTimeout loops so that only one is running at a time.

    - by DA
    I'm creating a content rotator in jQuery. 5 items total. Item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and will repeat the process indefinitely. I now want to add the ability to interrupt this rotator at any time by clicking on a navigation element to jump directly to one of the content items.

    I originally started going down the path of pinging a variable constantly (say every half second) that would check to see if a navigation element was clicked and, if so, abandon the loop, then restart the loop based on the item that was clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures... which are a little over my head but definitely something I need to delve into more. However, in the process of that, I came up with an alternative option that actually seems to be better performance-wise (theoretically, at least).

    I have a sample running here: http://jsbin.com/uxupi/14 (It's using console.log so have Firebug running.) Sample script:

        $(document).ready(function(){
            var loopCount = 0;

            $('p#hello').click(function(){
                loopCount++;
                doThatThing(loopCount);
            })

            function doThatOtherThing(currentLoopCount) {
                console.log('doThatOtherThing-'+currentLoopCount);
                if(currentLoopCount==loopCount){
                    setTimeout(function(){doThatThing(currentLoopCount)},5000)
                }
            }

            function doThatThing(currentLoopCount) {
                console.log('doThatThing-'+currentLoopCount);
                if(currentLoopCount==loopCount){
                    setTimeout(function(){doThatOtherThing(currentLoopCount)},5000);
                }
            }
        })

    The logic being that every click of the trigger element will kick off the loop, passing into itself a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable so that subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step of each loop is called, it checks to see if the variable it has still matches the global variable. If not, it knows that a new loop has already been activated so it just ends the existing loop.

    Thoughts on this? Valid solution? Better options? Caveats? Dangers?

    UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as such:

        var slideNumber = 0;
        var timeout = null;

        function startLoop(slideNumber) {
            ...do stuff here to set up the slide based on slideNumber...
            slideFadeIn()
        }

        function continueCheck(){
            if (timeout != null) {
                // cancel the scheduled task.
                clearTimeout(timeout);
                timeout = null;
                return false;
            }else{
                return true;
            }
        };

        function slideFadeIn() {
            if (continueCheck){
                // a new loop hasn't been called yet so proceed...
                // fade in the LI
                $currentListItem.fadeIn(fade, function() {
                    if(multipleFeatures){
                        timeout = setTimeout(slideFadeOut,display);
                    }
                });
            };

        function slideFadeOut() {
            if (continueLoop){
                // a new loop hasn't been called yet so proceed...
                slideNumber=slideNumber+1;
                if(slideNumber==features.length) {
                    slideNumber = 0;
                };
                timeout = setTimeout(function(){startLoop(slideNumber)},100);
            };

        startLoop(slideNumber);

    The above kicks off the looping. I then have navigation items that, when clicked, should stop the above loop and restart it with a new beginning slide:

        $(myNav).click(function(){
            clearTimeout(timeout);
            timeout = null;
            startLoop(thisItem);
        })

    If I comment out 'startLoop...' from the click event, it does indeed stop the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period. So, when I click my navigation, clearTimeout is called, which clears it.

  • How can I tell if a given hWnd is still valid?

    - by Ian P
    Please forgive my ignorance, I'm completely new when it comes to winforms programming. I'm using a third-party class that spawns an instance of Internet Explorer. This class has a property, hWnd, that returns the hWnd of the process. Later on down the line, I may want to reuse the instance of the application if it still exists, so I need to tell my helper class to attach to it. Prior to doing that, I'd like to know if the given hWnd is still valid, otherwise I'll spawn another instance. How can I do this in C# & .NET 3.5? Thanks for the help and I apologize if my winforms nomenclature is all wacky.. haha Ian
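
    One commonly used check is the Win32 IsWindow function via P/Invoke; a small sketch (with the caveat that window handles can be recycled by the OS, so IsWindow only says the handle currently refers to some window, not necessarily the one you spawned):

        using System;
        using System.Runtime.InteropServices;

        static class WindowCheck
        {
            [DllImport("user32.dll")]
            [return: MarshalAs(UnmanagedType.Bool)]
            private static extern bool IsWindow(IntPtr hWnd);

            public static bool IsStillValid(IntPtr hWnd)
            {
                return hWnd != IntPtr.Zero && IsWindow(hWnd);
            }
        }

        // Usage (the 'browser' object and its hWnd property stand in for the third-party class):
        //   if (WindowCheck.IsStillValid((IntPtr)browser.hWnd)) { /* reuse the instance */ }
        //   else { /* spawn a new one */ }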

  • Initiate a PHP class where class name is a variable

    - by ed209
    I need some help with an error I have not encountered before and can't seem to find anywhere. In a PHP MVC framework (just from a tutorial) I have the following:

        // Initiate the class
        $className = 'Controller_' . ucfirst($controller);
        if (class_exists($className)) {
            $controller = new $className($this->registry);
        }

    $className is showing the correct class name (case is also correct). But when I run it I get this in the Apache error log (no PHP error):

        [Wed Mar 31 10:34:12 2010] [notice] child pid 987 exit signal Segmentation fault (11)

    The process id is different on every call. I am running PHP 5.3.0 on OS X 10.6. This site seems to work on 5.2.11 on another Mac. Not really sure where to go next to debug it. I guess it could be an Apache setting as much as a PHP bug or a problem with the code... any suggestions on where to look next?

  • Entity Framework with File-Based Database

    - by Dave Swersky
    I am in the process of developing a desktop application that needs a database. The application is currently targeted to SQL Express 2005 and works wonderfully. However, I'm not crazy about having this dependency on SQL Express and would prefer to use a small file-based database. My problem is that I am using Entity Framework. I have tried both SQL Compact and SQLite, and they both have bizarre problems with EF v1. I get errors creating the Model, invalid models when it does get created... it's a nightmare. I'm about ready to give up and write a data layer and repository in the good-old-school Connection/Command pattern. Not my favorite plan... Is there a lightweight, file-based database out there that plays well with EF? OR Is there a better ORM tool that I should use instead of EF with my lightweight DB?

  • php - feof error

    - by TwixxyKit
    The file that I'm trying to read is a PGP-encrypted file. This is part of the process to decrypt it: I'm attempting to read the contents into a string so that I can then decrypt it. I'm not sure if that's the core problem here or not, but I'm getting an error:

        Warning: feof(): supplied argument is not a valid stream resource

    Here's the file code:

        if($handle = opendir($dir)) {
            while( false !== ($file = readdir($handle))) {
                if($file != "." && $file != "..") {
                    $fhandle = fopen($file, "r");
                    $encrypted = '';
                    $filename = explode('.',$file);
                    while(!feof($fhandle)) {
                        $encrypted .= fread($fhandle, filesize($file));
                    }
                    fclose($fhandle);
                    $decrypted = $filename[0].'.txt';
                    shell_exec("echo $passphrase | $gpg --passphrase-fd 0 -o $decrypted -d $encrypted");
                }
            }
        }
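
    A hedged guess at the cause, with a sketch: readdir() returns bare file names, so unless the script's working directory happens to be $dir, fopen($file, "r") fails and returns false, and feof(false) then produces exactly that warning. Prefixing the directory and checking fopen()'s return value would rule that out (the 8192-byte chunk size is just a conventional choice):

        if ($handle = opendir($dir)) {
            while (false !== ($file = readdir($handle))) {
                if ($file == "." || $file == "..") {
                    continue;
                }
                $path = $dir . '/' . $file;        // full path, not just the name
                $fhandle = fopen($path, "r");
                if ($fhandle === false) {          // guard before feof()/fread()
                    continue;
                }
                $encrypted = '';
                while (!feof($fhandle)) {
                    $encrypted .= fread($fhandle, 8192);
                }
                fclose($fhandle);
                // ... decrypt $encrypted as before ...
            }
            closedir($handle);
        }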

  • how to pass url in mailto's body

    - by Simer
    I need to send a URL of my site in the body so that the user can click on it to join my site, but it is coming out like this in the mail client: "Link goes here http://www.example.com/foo.php?this=a". The part of the URL after the & does not come through, so the whole joining process fails. How can I pass a URL like this in a mailto body: http://www.example.com/foo.php?this=a&join=abc&user454

        <a href="mailto:[email protected]?body=Link goes here http://www.example.com/foo.php?this=a&amp;really=long&amp;url=with&amp;lots=and&amp;lots=and&amp;lots=of&prameters=on_it ">Link text goes here</a>

    I have searched a lot but didn't find the right answer. Thanks.
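
    A sketch of the usual fix: everything after body= is itself a value inside a URL, so the ampersands of the inner link have to be percent-encoded (& becomes %26) or the mail client treats them as new mailto parameters. In PHP that is rawurlencode(); the address below is just the placeholder from the question.

        <?php
        $join_link = 'http://www.example.com/foo.php?this=a&join=abc&user454';
        $body      = 'Link goes here ' . $join_link;

        // rawurlencode() turns "&" into "%26", spaces into "%20", etc.
        $mailto    = 'mailto:[email protected]?body=' . rawurlencode($body);

        echo '<a href="' . htmlspecialchars($mailto) . '">Link text goes here</a>';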

  • How to generate one large dependency map for the whole project that builds with makefiles?

    - by Stan
    I have a gigantic project that is built using makefiles. Running make at the root of the project takes over 20 minutes when no files have changed (i.e. just traversing the project and checking for updated files). I'd like to create a dependency map that will tell me which directories I need to run 'make' in based on the file(s) changed. I already have a list of updated files that I can get from my version control system, and I'd like to skip the 20 minutes of traversing and get straight to the locations that do need to be recompiled. The project has a mix of several languages and custom tools, so this would ideally be language-independent (i.e. it would only process all makefiles to generate dependencies). I'll settle for a C/C++-specific solution, too, as the majority of the project is in C++. The project is built on Linux.

  • Interview - Program on Computer

    - by Gilad Naor
    Spent some time searching, couldn't find exactly what I'm after. We like giving hands-on programming exercises as part of the interview process. The interviewee gets a laptop, with the editor and compiler of his choice. He or she then gets a programming exercise and has about an hour to code it. Depending on the nature of the question, internet access is either allowed or forbidden. I'm looking for good general questions for a junior developer. I don't care what language they choose to program in. C++ is as good as Python or Scheme, as long as (s)he can program in it (this rules out "can you write a correct copy-constructor" style questions). I just want to see how they code, if their code is self-documenting, if they write tests, check edge cases, etc. What kind of questions would you ask?

  • Prevent DTD download when parsing XML, redux

    - by harpo
    I have a folder full of XHTML files, and a console program to process them. The problem is, it doesn't work when I'm offline, because:

        The remote name could not be resolved: 'www.w3.org'

    A number of sites, including this one, say to set the XmlResolver to null. That is not working. I have tried setting the XmlResolver to null every way that I can think of: in the XmlDocument, in XmlTextReader, and in the XmlReaderSettings. I believe that it's just creating a default instance. I have also tried implementing my own XmlResolver, but I don't know what to return for external items.

        public class NonXmlResolver : XmlUrlResolver
        {
            public override object GetEntity(
                Uri absoluteUri,
                string role,
                Type ofObjectToReturn)
            {
                if ( absoluteUri.Scheme == "http" )
                    throw new Exception("What you talking 'bout, Willis?");
                else
                    return base.GetEntity( absoluteUri, role, ofObjectToReturn );
            }
        }

    The documents do not use any special entities, only pure XML. This is possible, right?
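
    A sketch of the combination that is usually suggested for the 2.0/3.5-era API (hedged: ProhibitDtd was later replaced by DtdProcessing in .NET 4): let the reader accept the DOCTYPE declaration but give it no resolver, and load the XmlDocument from that reader so the document never gets a chance to fetch the DTD itself.

        using System.Xml;

        static XmlDocument LoadWithoutDtdFetch(string path)
        {
            var settings = new XmlReaderSettings
            {
                ProhibitDtd = false,   // accept the <!DOCTYPE ...> line...
                XmlResolver = null     // ...but never resolve the external DTD
            };

            var doc = new XmlDocument();
            using (XmlReader reader = XmlReader.Create(path, settings))
            {
                doc.Load(reader);
            }
            return doc;
        }

    The custom-resolver route can also work: instead of throwing, GetEntity can hand back an empty stream (new MemoryStream(new byte[0])) for http URIs, which satisfies the reader without touching the network, given that the documents use no external entities.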

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c. It detects the sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs and GDB backtraces the following information:

        (gdb) backtrace
        #0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        #1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        #2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        #3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with the kmem_cache files_cachep?

  • Problem in Closing Exe

    - by chandruswami
    In C#, I am using form1 as the home screen. While running the exe from the Debug folder, form1.exe appears in the Processes tab of Task Manager. The problem is: through form1 at runtime I may create more instances of form1. When I try to close a particular instance, the instance closes, but the exe still appears in Task Manager. What I found: if I close the instances in the order they were created, all the exes close correctly, but if I close the 4th instance and then the 3rd instance (instead of the 3rd and then the 4th), the exe remains in Task Manager. I also used Application.Exit(), but the exe still remains in Task Manager.
