Search Results

Search found 6104 results on 245 pages for 'fast esp'.


  • How can I protect my .NET assemblies from decompilation?

    - by Holli
    One of the first things I learned when I started with C# was also the most important one: you can decompile any .NET assembly with Reflector or other tools. Many developers are not aware of this fact, and most of them are shocked when I show them their own source code. Protection against decompilation is still a difficult task, and I am still looking for a fast, easy and secure way to do it. I don't want to merely obfuscate my code so that my method names become a, b, c and so on; I want Reflector and similar tools to be unable to recognize my application as a .NET assembly at all. I know about some tools already, but they are very expensive. Is there any other way to protect my applications? EDIT: The reason for my question is not to prevent piracy. I only want to stop competitors from reading my code. I know they will and they already did; they even told me so. Maybe I am a bit paranoid, but business rivals reading my code doesn't make me feel good.

    Read the article

  • jQuery DOM manipulation

    - by ufw
    I have different PHP output in jQuery-based tabs. This output is built from a database and consists of several <div>'s. Every time I click a tab, an AJAX request is sent to the PHP script, which runs a SELECT query and returns the result as the response for that tab.

        $(document).ready(function() {
            $('ul.tabs li').css('cursor', 'pointer');
            $('ul.tabs.tabs1 li').click(function() {
                var thisClass = this.className.slice(0, 2);
                $('div.t1').hide();
                $('div.t2').hide();
                $('div.t3').hide();
                $('div.t4').hide();
                $('div.' + thisClass).show('fast');
                $('ul.tabs.tabs1 li').removeClass('tab-current');
                $(this).addClass('tab-current');
                var data = thisClass;
                $.ajax({
                    type: "GET",
                    url: "handler.php?name=" + data,
                    data: data,
                    success: function(html) {
                        $('div.' + thisClass).html(html);
                    }
                });
                return false;
            });
        });

        // Server-side PHP script:
        <?php
        $name = $_GET[name];
        switch ($name) {
            case "t1": query_and_output_1();
            case "t2": query_and_output_2();
            // etc...
        }
        ?>

    The problem is that the first tab contains output that should be in the second and third ones as well, and the second also contains the third tab's output. Sorry for such a rubbishy question; I used to work only on the server side and unfortunately I'm not familiar with the DOM and jQuery yet. Thanks.

    Read the article

  • Project Euler problem 214: How can I make it more efficient?

    - by Once
    I am becoming more and more addicted to the Project Euler problems, but for the past week I have been stuck on #214. Here is a short version of the problem: PHI() is Euler's totient function, i.e. for any given integer n, PHI(n) = the number of k <= n for which gcd(k, n) = 1. We can iterate PHI() to create a chain. For example, starting from 18: PHI(18) = 6, PHI(6) = 2, PHI(2) = 1. So starting from 18 we get a chain of length 4 (18, 6, 2, 1). The problem is to calculate the sum of all primes less than 40e6 which generate a chain of length 25. I built a function that calculates the chain length of any number and I tested it for small values: it works well and fast. Sum of all primes <= 20 which generate a chain of length 4: 12. Sum of all primes <= 1000 which generate a chain of length 10: 39383. Unfortunately my algorithm doesn't scale well. When I apply it to the problem it takes several hours to calculate, so I stop it, because Project Euler problems are supposed to be solvable in under a minute. I thought my prime detection function might be slow, so I fed the program a list of primes < 40e6 to avoid the primality test. The code now runs a little bit faster, but there is still no way to get a solution in less than a few hours (and I don't want that). So is there any "magic trick" that I am missing here? I really don't understand how to be more efficient on this one. I am not asking for the solution, because fighting with optimization is all the fun of Project Euler, but any small hint that could put me on the right track would be welcome. Thanks!
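
    A sketch (not the asker's code) of the standard way to make this scale: sieve PHI for every n below the limit in a single pass, then build the chain lengths bottom-up so each value is computed exactly once, instead of recomputing chains per prime. Shown in Python for illustration; at the full 40e6 limit plain Python lists need a lot of memory and a few minutes, so an array/NumPy buffer or a compiled language is the realistic way to get under the one-minute mark.

        def solve(limit=40_000_000, target_len=25):
            phi = list(range(limit))            # phi[n] will end up holding totient(n)
            for i in range(2, limit):
                if phi[i] == i:                 # i is prime: adjust all its multiples
                    for j in range(i, limit, i):
                        phi[j] -= phi[j] // i
            chain = [0] * limit                 # chain[n] = length of n, phi(n), ..., 1
            chain[1] = 1
            total = 0
            for n in range(2, limit):
                chain[n] = chain[phi[n]] + 1    # phi(n) < n, so it is already known
                if phi[n] == n - 1 and chain[n] == target_len:
                    total += n                  # n is prime exactly when phi(n) == n - 1
            return total

        # sanity check against the asker's small case: solve(20, 4) == 12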

    Read the article

  • Is there a way to get number of connections in Signalr hub group?

    - by pajo
    Here is my problem: I want to track whether a user is online or offline and notify other clients about it. I'm using hubs and have implemented both the IConnected and IDisconnect interfaces. My idea was to send a notification to all clients when the hub detects a connect or disconnect. By default, when a user refreshes the page he gets a new connection id, and eventually the previous connection fires disconnect, notifying other clients that the user is offline even though he is actually online. I tried to use my own ConnectionIdFactory returning the username as the connection id, but with multiple tabs open it will at some point detect that the user's connection id disconnected, and after that the client-side hub tries (and fails) to reconnect in an endless loop, wasting memory and CPU and making the browser almost unusable. I needed a fast fix, so I removed my factory and now I add every new connection to a group named after the username, so I can easily notify a single user across all his connections. But then I have the problem of detecting whether the user is online or offline, since I don't know how many active connections the user has. So I'm wondering: is there a way to get the number of connections in one group? Or does anybody have a better idea for tracking when a user goes offline? I'm using SignalR 0.4.
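
    As far as I know SignalR itself doesn't expose a group's member count, so the usual answer is to do the bookkeeping yourself: map each username to the set of its live connection ids and treat the user as offline when the set becomes empty. The sketch below is deliberately not SignalR code (plain Python, just to show the bookkeeping); in a hub this map would typically be a static, thread-safe dictionary updated from the connect and disconnect handlers.

        from collections import defaultdict

        connections = defaultdict(set)      # user name -> ids of that user's live connections

        def on_connect(user, connection_id):
            went_online = not connections[user]
            connections[user].add(connection_id)
            return went_online              # True: broadcast "user is now online"

        def on_disconnect(user, connection_id):
            conns = connections.get(user)
            if not conns:
                return False
            conns.discard(connection_id)
            if not conns:
                del connections[user]
                return True                 # last tab/window closed: broadcast "offline"
            return False

        def connection_count(user):
            return len(connections.get(user, ()))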

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However, I am finding that there is no speedup at all. I implemented the parallel version of the loop as follows:
    - Wake up the two threads (they are blocked on a barrier).
    - Each thread then performs the following:
        - Atomically fetch and increment a global counter.
        - Retrieve the particle with that index.
        - Perform the computation on that particle, storing the result in a separate array.
        - Wait on a "job finished" barrier.
    - The main thread waits on the "job finished" barrier.
    I used this approach because it should provide good load balancing (each computation may take a differing amount of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have their own performance costs. If anybody has some ideas about what to look for, or any hints, I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.

    Read the article

  • How to speed up an already cached pip install?

    - by Maxime R.
    I frequently have to re-create virtual environments from a requirements.txt and I am already using $PIP_DOWNLOAD_CACHE. It still takes a lot of time, and I noticed the following: pip spends a lot of time between these two lines:

        Downloading/unpacking SomePackage==1.4 (from -r requirements.txt (line 2))
        Using download cache from $HOME/.pip_download_cache/cached_package.tar.gz

    Around 20 seconds on average just to decide it's going to use the cached package; after that the install is fast. This adds up to a lot of time when you have to install dozens of packages (enough, in fact, to prompt this question). What is going on in the background? Is it some sort of integrity check against the online package? Is there a way to speed this up? Edit: Looking at

        time pip install -v Django==1.4

    I get:

        real    1m16.120s
        user    0m4.312s
        sys     0m1.280s

    The full output is here: http://pastebin.com/e4Q2B5BA. It looks like pip is spending its time looking for a valid download link while it already has a valid cache of http://pypi.python.org/packages/source/D/Django/Django-1.4.tar.gz. Is there a way to make it look at the cache first and stop there if the versions match?

    Read the article

  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers like JBoss, GlassFish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may deploy changes much faster; once the functionality is in place, you can iron out bugs on the production platform. Unfortunately, finding that server is a time-consuming process, where other people's experience is invaluable. If you have experience with any server that has an Eclipse server adapter, please add your findings and your recommendation. I believe the following points are of interest:
    - Does saving a file trigger an update in the server, giving an edit, save, reload-browser workflow?
    - How fast is a deployment? (Saving a JSP? A Java class? A static file?)
    - Can the actual server be downloaded by the Server Adapter wizard, allowing for easy installation?
    - Are there known bugs and issues with suitable work-arounds?
    - Is debugging fully supported? Is profiling?
    - Would you recommend this server?
    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.

    Read the article

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently. One important issue is that some hosts are grouped together on single fast LANs in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central server system where multiple hosts might each fetch the same bundle over a slow link, rather than once via the slow link and then between each other quickly. A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform but maybe the same mechanism can be used). That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.
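
    One common shape for this, sketched below with made-up host names and the assumption that every host can reach its peers over passwordless ssh with rsync installed: copy the bundle once across the slow link to a single "seed" host per LAN group, then fan it out locally from the seed. BitTorrent-style deployment tools solve the same problem at larger scale, but for ~100 hosts a scripted two-tier rsync is often enough.

        import subprocess

        GROUPS = {                     # hypothetical LAN groups, seed host listed first
            "london": ["lon-01", "lon-02", "lon-03"],
            "tokyo":  ["tok-01", "tok-02"],
        }
        BUNDLE = "/data/bundles/2012-06-01.tar"

        def push(src_host, dst_host, path):
            # run rsync on the source host; it only sends what the destination lacks
            subprocess.check_call([
                "ssh", src_host,
                "rsync", "-a", "--partial", path, "%s:%s" % (dst_host, path),
            ])

        def distribute(origin):
            for hosts in GROUPS.values():
                seed = hosts[0]
                push(origin, seed, BUNDLE)        # once across the slow link
                for peer in hosts[1:]:
                    push(seed, peer, BUNDLE)      # then fan out inside the fast LAN

        # distribute("master-host")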

    Read the article

  • Caching issue with javascript and asp.net

    - by Ed Woodcock
    Hi guys, I asked a question a while back on here regarding caching data for a calendar/scheduling web app, and got some good responses. However, I have now decided to change my approach and start caching the data in JavaScript. I am directly caching the HTML for each day's column in the calendar grid inside the $('body').data() object, which gives very fast page load times (almost unnoticeable). However, problems start to arise when the user requests data that is not yet in the cache. This data is created by the server via an AJAX call, so it's asynchronous and takes about 0.2s per week's worth of data. My current approach is simply to block for 0.5s when the user requests information from the server, and to cache 4 weeks either side in the initial page load (plus 1 extra week per page-change request), but I doubt this is the optimal method. Does anyone have a suggestion as to how to improve the situation? To summarise:
    - Each week takes 0.2s to retrieve from the server, asynchronously.
    - Performance must be as close to real-time as possible (the data does not need to be fully real-time, though: most appointments are added by the user, so we can re-cache after that).
    - Currently 4 weeks are cached on either side of the initial week loaded: this is not enough.
    - Caching 1 year takes ~21s, which is too slow for an initial load.

    Read the article

  • Enumerate all paths in a weighted graph from A to B where path length is between C1 and C2

    - by awmross
    Given two points A and B in a weighted graph, find all paths from A to B where the length of the path is between C1 and C2. Ideally, each vertex should only be visited once, although this is not a hard requirement. I suppose I could use a heuristic to sort the results of the algorithm and weed out "silly" paths (e.g. a path that just visits the same two nodes over and over again). I can think of simple brute-force algorithms, but are there any more sophisticated algorithms that would make this more efficient? I can imagine that as the graph grows this could become expensive. In the application I am developing, A and B are actually the same point (i.e. the path must return to the start), if that makes any difference. Note that this is an engineering problem, not a computer science problem, so I can use an algorithm that is fast but not necessarily 100% accurate, i.e. it is OK if it returns most of the possible paths, or if most of the paths returned are within the given length range.
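
    There is probably no sub-exponential algorithm to hope for here, since the number of qualifying paths can itself be exponential, but the brute force becomes workable with one pruning rule: assuming non-negative edge weights, abandon any partial path whose length already exceeds C2. A rough Python sketch of that bounded DFS, with the graph as a plain adjacency dict (an illustrative representation, not from the question):

        def paths_between(graph, start, end, c1, c2):
            """Enumerate paths from start to end whose total weight lies in [c1, c2]."""
            results = []

            def dfs(node, path, length, visited):
                if length > c2:
                    return                       # over the upper bound: prune this branch
                if node == end and length > 0:
                    if length >= c1:
                        results.append(list(path))
                    return                       # stop at the target to keep paths simple
                for nxt, w in graph.get(node, ()):
                    if nxt in visited and nxt != end:
                        continue                 # revisit a vertex only if it is the target
                    newly_added = nxt not in visited
                    if newly_added:
                        visited.add(nxt)
                    path.append(nxt)
                    dfs(nxt, path, length + w, visited)
                    path.pop()
                    if newly_added:
                        visited.discard(nxt)

            dfs(start, [start], 0, {start})
            return results

        # e.g. graph = {"A": [("B", 3), ("C", 1)], "B": [("A", 3)], "C": [("A", 1)]}
        # paths_between(graph, "A", "A", 2, 6) finds the A-C-A and A-B-A round trips.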

    Read the article

  • ArrayAccess multidimensional (un)set?

    - by anomareh
    I have a class implementing ArrayAccess and I'm trying to get it to work with a multidimensional array. exists and get work; set and unset are giving me a problem, though.

        class ArrayTest implements ArrayAccess {
            private $_arr = array(
                'test' => array(
                    'bar' => 1,
                    'baz' => 2
                )
            );

            public function offsetExists($name) {
                return isset($this->_arr[$name]);
            }

            public function offsetSet($name, $value) {
                $this->_arr[$name] = $value;
            }

            public function offsetGet($name) {
                return $this->_arr[$name];
            }

            public function offsetUnset($name) {
                unset($this->_arr[$name]);
            }
        }

        $arrTest = new ArrayTest();
        isset($arrTest['test']['bar']);   // Returns TRUE
        echo $arrTest['test']['baz'];     // Echoes 2
        unset($arrTest['test']['bar']);   // Error
        $arrTest['test']['bar'] = 5;      // Error

    I know $_arr could just be made public so you could access it directly, but for my implementation that is not desired and it stays private. The last two lines throw an error: Notice: Indirect modification of overloaded element. I know ArrayAccess generally doesn't work with multidimensional arrays, but is there any way around this, or any somewhat clean implementation that allows the desired functionality? The best idea I could come up with is using a character as a separator, testing for it in set and unset and acting accordingly, though this gets really ugly really fast if you're dealing with variable depth. Does anyone know why exists and get work, so that I could maybe copy over the functionality? Thanks for any help anyone can offer.

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008 machine. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server the performance is great. The server grabs the 40 GB RAM it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the profiler) rises slowly. Yet I cannot see anything going wrong on the server:
    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - No open transactions or blocking to see
    We have already tried the following things without effect:
    - Dropped the caches using the statements DBCC FreeProcCache, DBCC FREESYSTEMCACHE('ALL') and DBCC DropCleanBuffers
    - Restarted the app server we are using
    - Restarted the SQL Server service
    But nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Method for defining simultaneous has-many and has-one associations between two models in CakePHP?

    - by Hobonium
    One thing I have long had problems with in the CakePHP framework is defining simultaneous hasOne and hasMany relationships between two models. For example:
    - BlogEntry hasMany Comment
    - BlogEntry hasOne MostRecentComment (where MostRecentComment is the Comment with the most recent created field)
    Defining these relationships in the BlogEntry model properties is problematic. CakePHP's ORM implements a has-one relationship as an INNER JOIN, so as soon as there is more than one Comment, BlogEntry::find('all') calls return multiple results per BlogEntry. I've worked around these situations in the past in a few ways:
    - Using a model callback (or, sometimes, even the controller or view!) to simulate a MostRecentComment with: $this->data['MostRecentComment'] = $this->data['Comment'][0]; This gets ugly fast if, say, I need to order the Comments any way other than by Comment.created. It also doesn't allow Cake's built-in pagination features to sort by MostRecentComment fields (e.g. sorting BlogEntry results reverse-chronologically by MostRecentComment.created).
    - Maintaining an additional foreign key, BlogEntry.most_recent_comment_id. This is annoying to maintain and bends Cake's ORM: the implication is BlogEntry belongsTo MostRecentComment. It works, but it just looks... wrong.
    These solutions left much to be desired, so I sat down with this problem the other day and worked on a better solution. I've posted my eventual solution below, but I'd be thrilled (and maybe just a little mortified) to discover there is some mind-blowingly simple solution that has escaped me this whole time. Or any other solution that meets my criteria:
    - it must be able to sort by MostRecentComment fields at the Model::find level (i.e. not just a massage of the results);
    - it shouldn't require additional fields in the comments or blog_entries tables;
    - it should respect the 'spirit' of the CakePHP ORM.
    (I'm also not sure the title of this question is as concise/informative as it could be.)

    Read the article

  • I need some pointers on how to implement inertia

    - by gargantaun
    OK, so I've created a little plugin that takes a bunch of elements and creates a sort of never-ending list. I'll try to explain: I have a div, and it's got about 20 elements in it. When the user scrolls up, the top element moves out of view and is moved to the bottom of the list, and vice versa, so that when the user scrolls down, the bottom element is moved to the top of the list. This is specifically for Mobile Safari (iPad, iPhone) web content and you can see the work in progress here: http://appliedworks.co.uk/files/times/SVGTests/drumView/drum.html (you'll need an iPad or iPhone to see the scrolling in action). You can see the plugin code here: http://appliedworks.co.uk/files/times/SVGTests/drumView/drumView-0.1b.js What I would like to do is implement inertia, so the scrolling slows to a halt in response to how fast or slow the user was scrolling when their finger left the screen, just like the inertia commonly found in the iPhone/iPad UI. The problem is that every time an element moves to the top or the bottom of the list, the scrollTop value of the parent div is adjusted to make it look like all the elements are staying in the same place, which means the scrollTop value is never more than the top element's total height. So there's no single value I can keep manipulating to give the illusion of inertia. I'm stumped. Does anyone have any suggestions?
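
    The usual recipe is independent of where the offset lives: estimate the velocity from the last couple of touchmove events, then after touchend run a timer that applies an exponentially decaying delta until it drops below a threshold; because the plugin already re-anchors scrollTop whenever an element wraps around, each delta can simply be fed through the existing scroll handler. A minimal sketch of just the decay arithmetic (Python for neutrality; the friction factor and tick rate are arbitrary choices):

        def inertia_offsets(velocity, friction=0.95, dt=1 / 60.0, cutoff=5.0):
            """Yield per-frame scroll deltas (px) after the finger lifts, until motion dies."""
            while abs(velocity) > cutoff:   # stop once the remaining speed is negligible
                yield velocity * dt         # distance covered during this frame
                velocity *= friction        # bleed off some speed each tick

        # velocity would come from the last touchmove events, roughly
        #   v = (x_now - x_prev) / (t_now - t_prev)      # px per second
        # and each yielded delta is applied to the scroll position on a ~16 ms timer.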

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, first some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects and compute a number of vectors from their surface, then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudo-code looks something like this:

        for each object {
            for a number of vectors {
                do some computations
                for each object {
                    check if vector intersects
                }
            }
        }

    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which compares arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays; however, when I used a vector in my real code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects') and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks until eventually the array is quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

    Read the article

  • How to find the end-of-scroll value for HorizontalScrollView

    - by DroidCoder
    I need to implement the following scenario: I have two HorizontalScrollViews; the upper scroll view is a main menu and the lower scroll view is a submenu. When I scroll the main menu, the menu item that comes under the center arrow shows its submenu in the lower scroll view, and in the lower scroll view the submenu item that comes under the center arrow shows the screen for that submenu. Here's my requirement: I've implemented it using HorizontalScrollViews and a ViewFlipper, and it works, but it only gives the correct result when I scroll slowly, not when I scroll fast. I've used the onTouch() method with the ACTION_UP event, so when I release my finger from the screen it gives me the getScrollX() position at that point, but what I need is the getScrollX() position when the scroll has finished/stopped. Here's my code:

        @Override
        public boolean onTouch(View v, MotionEvent event) {
            switch (v.getId()) {
            case R.id.mHorizontalScrollViewMain:
                if (event.getAction() == KeyEvent.ACTION_UP) {
                    Log.d("test", " " + hsvUpperTab.getScrollX() + " , " + mViewFlipper.getChildCount());
                    getUpperTabScrolled(hsvUpperTab.getScrollX());
                }
                break;
            case R.id.mHorizontalScrollViewSub:
                if (event.getAction() == KeyEvent.ACTION_UP) {
                    Log.d("test1", " " + hsvLowerTab.getScrollX());
                    getLowerTabScrolled(hsvLowerTab.getScrollX());
                }
            default:
                break;
            }
            return false;
        }

    Read the article

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem: I need to store huge amounts of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what the best way to do this is (combinations of programming language + OS + whatever else you think is important). The structure of the information I'm using is a 4D array (N x N x N x N) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so it is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference would help me a lot). An alternative solution I'm considering is to buy a dedicated server with lots of RAM, but I don't know for sure whether that would solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
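
    One low-effort option to try before renting a supercomputer (a sketch only, assuming the data fits on a local disk): keep the whole 4D array in a single file and memory-map it, so the OS pages the 8-byte floats in and out on demand while the code indexes it as if it were in RAM. In Python this is a few lines with NumPy; HDF5 (via h5py or PyTables) is the heavier-duty alternative when chunking and compression are needed.

        import numpy as np

        N = 254                                  # 254**4 doubles is roughly 33 GB on disk
        data = np.memmap("data.bin", dtype=np.float64, mode="w+", shape=(N, N, N, N))

        data[3, 1, 4, 1] = 5.9                   # element access touches only the pages it needs
        block = data[10]                         # an N x N x N slice, still backed by the file
        col_mean = data[:, :, 0, 0].mean()       # reductions work, but may read much of the file

        data.flush()                             # write dirty pages back to the file

    Access pattern dominates the speed here: contiguous slices along the last axes stream nicely, scattered single-element reads do not.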

    Read the article

  • Using git filter-branch to remove commits by their commit message

    - by machineghost
    In our repository we have a convention where every commit message starts with a certain pattern: Redmine #555: SOME_MESSAGE. We also do a bit of rebasing to bring the potential release branch's changes into a specific issue's branch. In other words, I might have branch "foo-555", but before I merge it into branch "pre-release" I need to get any commits that pre-release has that foo-555 doesn't (so that foo-555 can fast-forward merge into pre-release). However, because pre-release sometimes changes, we sometimes wind up with situations where you bring in a commit from pre-release, but then that commit later gets removed from pre-release. It's easy to identify commits that came from pre-release, because the number in their commit message won't match the branch number; for instance, if I see "Redmine #123: ..." in my foo-555 branch, I know that it's not a commit from my branch. So now the question: I'd like to remove all of the commits that "don't belong" to a branch; in other words, any commit that:
    - is in my foo-555 branch, but not in the pre-release branch (pre-release..foo-555), and
    - has a commit message that doesn't start with "Redmine #555"
    but of course "555" will vary from branch to branch. Is there any way to use filter-branch (or any other tool) to accomplish this? Currently the only way I can see to do it is an interactive rebase ("git rebase -i") where I manually remove all the "bad" commits.

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked both in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in environments where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...', 'It will break, just like any other application would...', 'I, uh...'. How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending those 6 months consulting about what tender process we might use?

    Read the article

  • Sequential animations in jQuery

    - by Pickels
    I see a lot of people struggling with this (me included). I guess it's mostly down to not knowing exactly how JavaScript scopes work. An image slideshow is a good example: say you have a series of images, and the first one fades in, waits, fades out, and then the next image follows. When I first created this I was already a little lost. I think the biggest problem in creating the basics is keeping it clean; I noticed that working with callbacks can get ugly pretty fast. To make it even more complicated, most slideshows have control buttons: next, previous, go to image, pause, ... I've been trying this for a while now and this is where I got:

        $(InitSlideshow);

        function InitSlideshow() {
            var img = $("img").hide();
            var animate = {
                wait: undefined,
                start: function() {
                    img.fadeIn(function() { animate.middle(); });
                },
                middle: function() {
                    animate.wait = setTimeout(function() { animate.end(); }, 1000);
                },
                end: function() {
                    img.fadeOut(function() { animate.start(); });
                },
                stop: function() {
                    img.stop();
                    clearTimeout(animate.wait);
                }
            };
            $("#btStart").click(animate.start);
            $("#btStop").click(animate.stop);
        };

    This code works (I think), but I wonder how other people deal with sequential animations in jQuery? Tips and tricks are most welcome, as are links to good resources dealing with this issue. If this has been asked before I will delete this question; I didn't find a lot of information about it right away. I hope making this community wiki is correct, as there is not one correct answer to my question. Kind regards, Pickels

    Read the article

  • Why does Tex/Latex not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could see the impact of that. Also, there might be situations where 100% correctness is not needed as long as it's fast.) Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for it (because it would have been slow anyway on the large computers of the time)? But then, on the other hand, why doesn't this annoy other people so much that they've started a fork which has some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting all the content into chapterXX.tex files and then commenting them out?

    Read the article

  • LINQ to SQL performance in highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I had got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all that good stuff), so after hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to be slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers; now I don't have to worry about that. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system reads data at the beginning, works with it and updates it. The updates are primarily increments and decrements of values. I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @id

    That worked with no problems, obviously. But with LINQ to SQL the data is read first, moved into the class, changed and then saved:

        Stats.registeredusers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by someone else (which happens on my site all the time), then LINQ will say "oops, this value is already 100,001; I'll throw an exception". You can change this behaviour so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was incremented about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
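
    The usual way out is to keep LINQ to SQL for everything else but push the increment itself back into SQL, e.g. with DataContext.ExecuteCommand issuing the same UPDATE ... SET value = value + 1 statement, so the database does the arithmetic atomically. A neutral illustration of the difference (Python with sqlite3 purely because it is self-contained; the table and column names are made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, registered_users INTEGER)")
        conn.execute("INSERT INTO stats VALUES (1, 100000)")

        # Lost-update prone: read the value, bump it in application code, write it back.
        (value,) = conn.execute("SELECT registered_users FROM stats WHERE id = 1").fetchone()
        conn.execute("UPDATE stats SET registered_users = ? WHERE id = 1", (value + 1,))

        # Safe under concurrency: the database performs the increment atomically.
        conn.execute("UPDATE stats SET registered_users = registered_users + 1 WHERE id = 1")
        conn.commit()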

    Read the article

  • Trying to write up a C daemon, but don't know enough C to continue

    - by JamesM-SiteGen
    Okay, so I want this daemon to run in the background with little to no interaction. I plan to have it work with Apache, lighttpd, etc., which will pass it the session and request information so that the C program can generate a website from an object DB. Saving will have to be an option, so you can start the daemon with an existing DB but it will not save to it unless you log in to the admin area, enable saving, and restart the daemon. Summary of the daemon:
    - Load a database from file.
    - Have a function to restart the daemon.
    - Allow Apache, lighttpd, etc. to pass in the necessary data about the request and session.
    - A variable to decide whether the database is saved back to the file; otherwise it is only stored in RAM. If it is set to save back to the file, then keep only the necessary data in RAM.
    - Use SQLite for the database file.
    - Build a webpage from template files, with $(myVar) for getting variables. Get templates from a directory, e.g. ./templates/01-test/{index.html,template.css,template.js}
    Live version of the code and more information: http://typewith.me/YbGB1h1g1p Also, I am currently working on a website CMS in PHP, but I am trying to switch to C as it is faster than PHP. (PHP is quite fast, but making several MySQL requests for every webpage is quite inefficient, and I'm sure it can be done far better, so an object DB we can recall data from in C would have to be faster.) P.S. I am using Arch Linux, not MS Windows, with the package group base-devel for the common developer tools such as make and makepkg. Edit: Oops, forgot the question ;) Okay, so the question is: how can I turn this basic C daemon into a base for what I am attempting to do here?

    Read the article

  • How to get access to private members of a nested class?

    - by macias
    Background: I have an enclosing (parent) class E with a nested class N, and several instances of N in E. In the enclosing (parent) class I do some calculations and set the values for each instance of the nested class. Something like this:

        n1.field1 = ...;
        n1.field2 = ...;
        n1.field3 = ...;
        n2.field1 = ...;
        ...

    It is one big eval method (in the parent class). My intention is, since all the calculations are in the parent class (they cannot be done per nested instance because that would make the code more complicated), to make the setters available only to the parent class and the getters public. And now there is a problem:
    - when I make the setters private, the parent class cannot access them
    - when I make them public, everybody can change the values
    - C# does not have the friend concept
    - I cannot pass the values in the constructor, because a lazy evaluation mechanism is used (the instances have to be created before they are referenced; I create all the objects and the calculation is triggered on demand)
    I am stuck: how do I do this (limit access to the parent class, no more, no less)? I suspect I'll get a question back first: "but why don't you split the evaluation per field?" So I'll answer that with an example: how do you calculate the min and max value of a collection? In a fast way? The answer is: in one pass. This is why I have one eval function which does the calculations and sets all the fields at once.

    Read the article

  • One big executable or many small DLL's?

    - by Patrick
    Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs but put everything in this one big executable. Having one big executable has certain advantages:
    - Installing my application at the customer is really just copy and run.
    - Upgrades can easily be zipped and sent to the customer.
    - There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL).
    The big disadvantage of the big EXE is that linking times seem to grow exponentially. An additional problem is that a part of the code (say about 40%) is shared with another application. Again, the advantages are:
    - There is no risk of having a mix of incorrect DLL versions.
    - Every developer can make changes to the common code, which speeds up development.
    But again, this has a serious impact on compilation times (everyone compiles the common code again on their own PC) and on linking times. The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to link all the functions manually in your application (using LoadLibrary, GetProcAddress, ...). What is your opinion on executable sizes, the use of DLLs and the best 'balance' between easy deployment and easy/fast development?

    Read the article
