Search Results

Search found 7324 results on 293 pages for 'operations research'.

Page 172/293

  • LogCat acting patchy

    - by OceanBlue
    The LogCat output in Eclipse suddenly started acting weird. It sometimes shows all the output as expected; other times it just doesn't show anything and is completely blank. After some internet research, I thought the Mylyn plugin might be responsible, so I tried de-selecting "Mylyn Tasks UI" & "Mylyn Team UI" under Eclipse Window -- Preferences -- General -- Startup and Shutdown (programs that are started at startup). No luck. Has anyone else encountered & solved this problem?

    Read the article

  • How sophisticated should a DAL be?

    - by Andrew Florko
    Basically, a DAL (Data Access Layer) should provide simple CRUD (Create/Read/Update/Delete) methods, but I am always tempted to create more sophisticated methods in order to minimize database roundtrips from the Business Logic Layer. What do you think about the following extensions to CRUD (most of them are OK, I suppose)?
    Read: GetById, GetByName, GetPaged, GetByFilter... etc.
    Create: GetOrCreate methods (the model entity is returned from the DB, or created and returned if not found); Create(lots-of-relations) instead of a Create call plus multiple AssignTo calls
    Update: Merge methods (a list of entities is updated, created and deleted in one call)
    Delete: Delete(bool children) - optional deletion of children; Cleanup methods
    Where do you usually implement Entity Cache capabilities, DAL or BLL? (My choice is BLL, but I have seen DAL implementations also.) Where is the boundary when you decide: this operation is too specific, so I should implement it in the Business Logic Layer as multiple DAL calls? I have often found inefficient BLL operations that were implemented in a dozen database roundtrips because the developer was afraid to create a slightly more sophisticated DAL. Thank you in advance!
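
    As a rough illustration of the kind of interface in question - a minimal sketch in Java, with hypothetical Customer and CustomerFilter types standing in for real model entities:

      import java.util.List;

      // Hypothetical entity and filter types, for illustration only.
      class Customer { long id; String name; }
      class CustomerFilter { String nameContains; }

      // An extended-CRUD repository of the kind described above.
      interface CustomerRepository {
          // plain CRUD
          Customer getById(long id);
          void create(Customer c);
          void update(Customer c);
          void delete(long id);

          // the "sophisticated" extensions under discussion
          Customer getOrCreate(String name);             // fetch, or insert and return
          List<Customer> getPaged(int page, int size);   // paging pushed into the DAL
          List<Customer> getByFilter(CustomerFilter f);  // ad-hoc filtering
          void merge(List<Customer> batch);              // create/update/delete in one roundtrip
          void delete(long id, boolean children);        // optional cascade to children
      }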

    Read the article

  • System.UnauthorizedAccessException from Serial Port in VB.NET

    - by psuhas
    I am using VB.NET 2008 Express Edition to access a serial port, which is actually a USB-to-serial adapter. Since this is removable, the user can disconnect it at any time while the app is running. I am getting an unhandled exception when I remove the USB serial port. After research, it seems to be a known problem in .NET (even in 3.5). I am looking for a solution. I have already tried the app.config fix that was suggested, and it does not work. Here is the link for the issue: http://connect.microsoft.com/VisualStudio/feedback/Validation.aspx?FeedbackID=140018

    Read the article

  • Invoke a cleanup method for a Java user thread when the JVM stops the thread

    - by user309281
    Hi all, I have a J2SE application running in Linux, and a stop-application script in which I kill the J2SE pid. This application has 6 infinitely running user threads, which poll for specific records in a backend DB. When the Java pid is killed, I need to perform some cleanup operations for each of the long-running threads, like connecting to the DB and setting the status of in-progress transactions back to empty. Is there a way to write a method in each thread that will be called when the JVM is about to stop the thread?
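
    For what it's worth, if the script kills the pid with a plain kill (SIGTERM) rather than kill -9 (which bypasses any in-process handling), the JVM runs registered shutdown hooks before exiting, so each worker can be asked to clean up. A minimal sketch:

      // Minimal sketch: let a polling thread clean up when the JVM receives SIGTERM.
      public class PollingWorker implements Runnable {
          private volatile boolean running = true;

          public void run() {
              while (running) {
                  // poll the backend DB for specific records...
              }
              // connect to the DB and reset in-progress transactions here
          }

          public void shutdown() { running = false; }

          public static void main(String[] args) {
              final PollingWorker worker = new PollingWorker();
              final Thread t = new Thread(worker);
              t.start();
              Runtime.getRuntime().addShutdownHook(new Thread() {
                  public void run() {
                      worker.shutdown();
                      try { t.join(5000); } catch (InterruptedException ignored) {}
                  }
              });
          }
      }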

    Read the article

  • Changing timezone without changing time in Java

    - by Martin
    Hi! I'm receiving a datetime from a SOAP webservice without timezone information, so the Axis deserializer assumes UTC. However, the datetime really is in Sydney time. I've solved the problem by subtracting the timezone offset:

      Calendar trade_date = trade.getTradeDateTime();
      TimeZone est_tz = TimeZone.getTimeZone("Australia/Sydney");
      long millis = trade_date.getTimeInMillis() - est_tz.getRawOffset();
      trade_date.setTimeZone( est_tz );
      trade_date.setTimeInMillis( millis );

    However, I'm not sure if this solution also takes daylight saving into account. I think it should, because all operations are on UTC time. Any experiences with manipulating time in Java? Better ideas on how to solve this problem?
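
    For what it's worth, getRawOffset() deliberately excludes daylight saving; TimeZone.getOffset(long) includes the DST offset in effect at that instant. A sketch of the same adjustment with the DST-aware call, reusing the question's trade object (modulo edge cases right around a DST transition):

      Calendar trade_date = trade.getTradeDateTime();
      TimeZone syd = TimeZone.getTimeZone("Australia/Sydney");
      long utcMillis = trade_date.getTimeInMillis();
      // getOffset(t) returns raw offset + DST offset at instant t,
      // unlike getRawOffset(), which is the standard offset only.
      long millis = utcMillis - syd.getOffset(utcMillis);
      trade_date.setTimeZone(syd);
      trade_date.setTimeInMillis(millis);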

    Read the article

  • C++: A variation of priority queue

    - by Helltone
    I need some kind of priority queue to store pairs <key, value>. Values are unique, but keys aren't. I will be performing the following operations (most common first):
    1. random insertion;
    2. retrieving (and removing) all elements with the least key;
    3. random removal (by value).
    I can't use std::priority_queue because it only supports removing the head. For now, I'm using an unsorted std::list. Insertion is performed by just pushing new elements to the back (O(1)). Operation 2 sorts the list with std::sort (O(N*logN)) before performing the actual retrieval. Removal, however, is O(N), which is a bit expensive. Any idea of a better data structure?
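
    One standard shape for this is an ordered map of key -> bucket plus a reverse index of value -> key. Sketched below in Java as a rough illustration; in C++ the same roles would be played by a std::map<Key, std::set<Value>> plus a std::unordered_map<Value, Key>, giving O(log N) for all three operations:

      import java.util.*;

      // Sketch: ordered buckets by key, plus a reverse index for removal by value.
      class PriorityBag<K extends Comparable<K>, V> {
          private final TreeMap<K, LinkedHashSet<V>> byKey = new TreeMap<K, LinkedHashSet<V>>();
          private final HashMap<V, K> keyOf = new HashMap<V, K>();

          void insert(K key, V value) {                 // operation 1, O(log N)
              LinkedHashSet<V> bucket = byKey.get(key);
              if (bucket == null) byKey.put(key, bucket = new LinkedHashSet<V>());
              bucket.add(value);
              keyOf.put(value, key);
          }

          Set<V> removeLeast() {                        // operation 2: all elements with least key
              Map.Entry<K, LinkedHashSet<V>> e = byKey.pollFirstEntry();
              if (e == null) return Collections.<V>emptySet();
              for (V v : e.getValue()) keyOf.remove(v);
              return e.getValue();
          }

          void removeByValue(V value) {                 // operation 3, O(log N)
              K key = keyOf.remove(value);
              if (key == null) return;
              LinkedHashSet<V> bucket = byKey.get(key);
              bucket.remove(value);
              if (bucket.isEmpty()) byKey.remove(key);
          }
      }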

    Read the article

  • Testing perceived performance

    - by Josh Kelley
    I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing. Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...). I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.

    Read the article

  • 'e-Commerce' scalable database model

    - by Ruben Trancoso
    I would like to understand database scalability, so I've just listened to a talk about Habits of Highly Scalable Web Applications: http://techportal.ibuildings.com/2010/03/02/habits-of-highly-scalable-web-applications/ In it, the presenter mainly talks about relational database scalability. I have also read something about MapReduce and column-oriented tables (BigTable, Hypertable, etc.), trying to understand which are the most up-to-date methods for scaling web application data. But it is hard for me to understand where the second group fits. Does it serve as a transactional, reliable data store? Or is it just for large-scale access and processing, while to handle fine-grained operations we will always need to rely on RDBMSs? Could someone give a comprehensive landscape of these new technologies and how to use them?

    Read the article

  • Avoiding dispose of underlying stream

    - by danbystrom
    I'm attempting to mock some file operations. In the "real" object I have:

      StreamWriter createFile( string name )
      {
          return new StreamWriter( Path.Combine( _outFolder, name ), false, Encoding.UTF8 );
      }

    In the mock object I'd like to have:

      StreamWriter createFile( string name )
      {
          var ms = new MemoryStream();
          _files.Add( Path.Combine( _outFolder, name ), ms );
          return new StreamWriter( ms, Encoding.UTF8 );
      }

    where _files is a dictionary that stores created files for later inspection. However, when the consumer closes the StreamWriter, it also disposes the MemoryStream... :-( Any thoughts on how to pursue this?
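
    The general escape hatch is a wrapper stream that swallows Close/Dispose - in .NET terms, a Stream wrapper (or MemoryStream subclass) whose Dispose is overridden to do nothing. A minimal sketch of the pattern, shown here in Java since the idea is language-neutral:

      import java.io.FilterOutputStream;
      import java.io.IOException;
      import java.io.OutputStream;

      // Passes writes through but ignores close(), so the underlying
      // in-memory buffer stays readable after the consumer is done.
      class NonClosingOutputStream extends FilterOutputStream {
          NonClosingOutputStream(OutputStream out) { super(out); }

          @Override
          public void close() throws IOException {
              flush(); // push any buffered bytes, but do not close the target
          }
      }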

    Read the article

  • Clustered index dilemma - ID or sort?

    - by richardtallent
    I have a table with two very important fields:

      id INT IDENTITY(1,1) PRIMARY KEY,
      identifiersortcode VARCHAR(900)

    My app always sorts and pages search results in the UI based on identifiersortcode, but all table joins (and they are legion) are on the id field. (Aside: yes, the sort code really is that long. There's a strong BL reason.) Also, due to O/RM use, most SELECT statements are going to pull almost every column. Currently, the clustered index is on id, but I'm wondering if the TOP / ORDER BY portion of most queries would make identifiersortcode a more attractive option as the clustered key, even considering all of the table joins going on. Inserts on the table and changes to the identifiersortcode are limited enough that changing my clustered index wouldn't be a problem for insert/update operations. Trying to make the sort code's non-clustered index a covering index (using INCLUDE) is not a good option: there are a number of large columns, and some of them have a lot of update activity.

    Read the article

  • Pros and cons of where to place business logic: app level or DB

    - by Juri
    Hi, I keep encountering discussions about where to place the business logic: inside a business layer in the application code, or down in the DB in terms of stored procedures. Personally I tend towards the first approach, but I'd like to hear some opinions from you first, without influencing you with my personal views. I know there is no one-size-fits-all solution and it often depends on many factors, but we can discuss that. Btw, we are in the context of web applications, and our current approach is to have:
    - a UI layer which accepts UI input and does a first, client-side validation
    - a Business layer with a number of service classes which contain the business logic, including (server-side) validation of user input
    - a Data Access Layer which calls stored procedures in the DB for persistence/read operations
    Many people, however, tend to move the business layer stuff (especially the validation) down to the DB in terms of stored procedures. What do you think about it? I'd like to discuss.

    Read the article

  • PHP PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction & factory methods. For background, my site is a special-interest social networking community currently in beta mode. I've started moving my old object-retrieval code to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure separated in each function/method. Questions:
    1.) Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2.) Can PEAR interface with the MySQLi prepared statements I currently use?
    3.) Will it help me separate table names from each method? (If no, what other design patterns might I want to research?)
    4.) Will it slow down my site once I have a significantly large member base?

    Read the article

  • What is the difference between File and FileInfo in C#?

    - by Lilitu88
    I've been reading that the static methods of the File class are better suited to performing a few small tasks on a file, like checking whether it exists, and that we should use an instance of the FileInfo class if we are going to perform many operations on a specific file. I understand this and could simply use it that way blindly, but I would like to know why there is a difference. What is it about the way they work that makes them suitable for different situations? What is the point of having these two different classes that seem to do the same thing in different ways? It would be helpful if someone could answer at least one of these questions.

    Read the article

  • Cached Schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform" and after it sank in, I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought, why not use a hash to cache results? Here's some code:

      # a place to keep our results
      my %cache;

      # the transformation we are interested in
      sub foo {
          # expensive operations
      }

      # some data
      my @unsorted_list = ....;

      # sorting with the help of the cache
      my @sorted_list = sort {
          ($cache{$a} or $cache{$a} = &foo($a))
              <=>
          ($cache{$b} or $cache{$b} = &foo($b))
      } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books and in general just better circulated? On first glance, I think the cached version should be more efficient.
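
    Incidentally, this caching idiom has a name in Perl circles - the Orcish Maneuver ("or-cache") - and the same idea carries over to other languages. A rough Java sketch, with foo() standing in for the expensive transformation:

      import java.util.*;

      public class CachedSort {
          // Stand-in for the expensive transformation foo().
          static int foo(String s) {
              return s.length(); // imagine something costly here
          }

          public static void main(String[] args) {
              List<String> unsorted = new ArrayList<String>(
                      Arrays.asList("bbb", "a", "cc", "a"));
              final Map<String, Integer> cache = new HashMap<String, Integer>();
              // foo() runs once per distinct element, not once per comparison.
              Collections.sort(unsorted, new Comparator<String>() {
                  public int compare(String a, String b) {
                      Integer ka = cache.get(a);
                      if (ka == null) cache.put(a, ka = foo(a));
                      Integer kb = cache.get(b);
                      if (kb == null) cache.put(b, kb = foo(b));
                      return ka.compareTo(kb);
                  }
              });
              System.out.println(unsorted); // [a, a, cc, bbb]
          }
      }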

    Read the article

  • Rack::ResponseHeaders in rackup for Sinatra

    - by sevennineteen
    I think this is a very easy one, but I can't seem to get it right. Basically, I'm trying to use Rack middleware to set a default Cache-Control header on all responses served by my Sinatra app. It looks like Rack::ResponseHeaders should be able to do exactly what I need, but I get an error when attempting to use the syntax demonstrated here in my rackup file:

      use Rack::ResponseHeaders do |headers|
        headers['X-Foo'] = 'bar'
        headers.delete('X-Baz')
      end

    I was able to get Rack::Cache to work successfully as follows:

      use Rack::Cache, :default_ttl => 3600

    However, this doesn't achieve exactly the output I want, whereas Rack::ResponseHeaders gives fine-grained control of the headers. FYI, my site is hosted on Heroku, and the required Rack gems are specified in my .gems manifest. Thanks!
    Update: After doing some research, it looks like the first issue is that Rack::ResponseHeaders is not found in the version of rack-contrib (0.9.2) which was installed. I'll start by looking into that.

    Read the article

  • Are any of these quad-tree libraries any good?

    - by Noctis Skytower
    It appears that a certain project of mine will require the use of quad-trees, something that I have never worked with before. From what I have read, they should yield substantially better performance than a brute-force attempt at the problem would. Are any of these Python modules any good?
    - Quadtree 0.1.2 <= No: unable to execute in Python 3.1
    - QuadTree <= Yes: simple while working with rectangles
    - quadtree.py <= No: no support for needed operations
    EDIT: Does anyone know of a better implementation than the one presented in the pygame wiki article?

    Read the article

  • Can "\Device\NamedPipe\\Win32Pipes" handles cause "Too many open files" error?

    - by Igor Oks
    Continuing from this question: when I try to do fopen on Windows, I get a "Too many open files" error. I tried to analyze how many open files I have, and it seems like not too many. But when I ran Process Explorer, I noticed that I have many open handles with similar names: "\Device\NamedPipe\Win32Pipes.00000590.000000e2", "\Device\NamedPipe\Win32Pipes.00000590.000000e3", etc. I see that the number of these handles is exactly equal to the number of iterations my program executed before it returned "Too many open files" and stopped. I am looking for an answer: what are these handles, and could they actually cause the "Too many open files" error? In my program I am loading files from a remote drive and creating TCP/IP connections. Could one of these operations create these handles?

    Read the article

  • Zlib not available in OS X?

    - by Tylo
    I am trying to install a Python library and receive this error after downloading an egg file:

      Downloading http://pypi.python.org/packages/2.5/s/setuptools/setuptools-0.6c7-py2.5.egg
      Traceback (most recent call last):
        File "setup.py", line 10, in <module>
          use_setuptools(min_version=min_version)
        File "/Users/tylo/Downloads/Archives/simplejson-2.0.9/ez_setup.py", line 88, in use_setuptools
          import setuptools; setuptools.bootstrap_install_from = egg
      zipimport.ZipImportError: can't decompress data; zlib not available

    I did some research and discovered that zlib is built into OS X. What could be going wrong here?

    Read the article

  • Run shell script using fabric and piping script text to shell's stdin

    - by Peter Lyons
    Is there a way to execute a multi-line shell script by piping it to the remote shell's standard input in fabric? Or must I always write it to the remote filesystem, then run it, then delete it? I like sending to stdin as it avoids the temporary file. If there's no fabric API (and it seems like there is not based on my research), presumably I can just use the ssh module directly. Basically I wish fabric.api.run was not limited to a 1-line command that gets passed to the shell as a command line argument, but instead would take a full multi-line script and write it to the remote shell's standard input.

    Read the article

  • Recommended way to manage persistent PHP script processes?

    - by BigglesZX
    First off - hello, this is my first Stack Overflow question so I'll try my best to communicate properly. The title of my question may be a bit ambiguous so let me expand upon it immediately: I'm planning a project which involves taking data inputs from several "streaming" APIs, Twitter being one example. I've got a basic script coded up in PHP which runs indefinitely from the command line, taking input from the Twitter streaming API and doing very basic things with it. My eventual objective is to have several such processes running (perhaps daemonized using the System Daemon PEAR class), and I would like to be able to manage them from some governing process (also a PHP script). By manage I mean basic operations such as stop/start and (most crucially) automatically restarting a process that crashes. I would appreciate any pointers on how best to approach this process management angle. Again, apologies if this question is too expansive - tips on more tightly focused lines of enquiry would be appreciated if necessary. Thanks for reading and I look forward to your answers.
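
    On the restart-on-crash part, the core of any such governing process is a loop that starts the worker, blocks until it exits, and starts it again. A language-neutral sketch of that loop (shown here in Java; the worker command "php stream_worker.php" is purely hypothetical):

      import java.io.IOException;

      // Minimal supervisor sketch: start a worker command and restart it on crash.
      public class Supervisor {
          public static void main(String[] args) throws IOException, InterruptedException {
              // Hypothetical worker; in the question's setup this would be
              // one of the streaming-API scripts run from the command line.
              ProcessBuilder pb = new ProcessBuilder("php", "stream_worker.php")
                      .inheritIO();
              while (true) {
                  Process worker = pb.start();
                  int exit = worker.waitFor();      // block until the worker dies
                  System.err.println("worker exited with " + exit + "; restarting");
                  Thread.sleep(1000);               // back off briefly before restarting
              }
          }
      }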

    Read the article

  • Designing DAL in .NET to be "data-source independent" and not just "database independent"?

    - by Munish Goyal
    How does one design such a flexible DAL (specifically in .NET)? What interfaces does .NET provide, and what should be done on my own? It's a greenfield project starting with SQL Server as the data source, but in the future parts of it will move to different NoSQL-type datastores. Also, we may need to experiment with lots of different datastores (some data may have to go to Cassandra, some to an RDBMS, some to another DHT, etc.). Therefore an easily switchable access layer will be needed. All I know right now is the 'data' and the 'operations needed on that data'.
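
    One common shape for this - a rough sketch in Java, with all names illustrative only; the interface/factory split carries over directly to .NET - is to code the application against a storage-neutral contract and pick the implementation behind a factory:

      import java.util.HashMap;
      import java.util.Map;

      // Storage-neutral contract: the app sees only data and operations on it.
      interface RecordStore {
          void put(String key, String value);
          String get(String key);
      }

      // Stand-in implementation; a real SqlRecordStore / CassandraRecordStore
      // would wrap the vendor client, but callers never know which is in use.
      class InMemoryRecordStore implements RecordStore {
          private final Map<String, String> data = new HashMap<String, String>();
          public void put(String key, String value) { data.put(key, value); }
          public String get(String key) { return data.get(key); }
      }

      class RecordStoreFactory {
          // Switch datastores via configuration rather than code changes.
          static RecordStore create(String kind) {
              if ("memory".equals(kind)) return new InMemoryRecordStore();
              throw new IllegalArgumentException("unknown store: " + kind);
          }
      }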

    Read the article

  • tool for adding parentheses to equations?

    - by jedierikb
    Is there an online tool for adding parentheses to simple math equations? For example, turning a + b * c into a + (b * c). Those who paid more attention in math class might be able to tackle order of operations for huge equations in their heads, but I could often use some help (and verification of my thinking). I often encounter equations and functions I need for my code in other people's libraries, and this would be kind of helpful for debugging and understanding. I was hoping Wolfram Alpha would do this, but its output is not easy to plug back into most programming languages, e.g. a + (bc)

    Read the article

  • Need help getting Suspend to work in Ubuntu on laptop

    - by Aerik
    I've been doing a lot of research, but I've got to admit right out front that I'm not even sure exactly what the right question is. I've installed Kubuntu 10.4 on a Panasonic Toughbook CF-29. When I try to use "suspend", the screen flickers, and then the hard drive light goes off but the power light stays on. I looked at /var/log/pm-suspend.log, but I don't see any errors... though I'm not sure what success should look like either. So I guess my real question is a bit more accurately stated as "How do I troubleshoot suspend not working in Kubuntu on a laptop?" Thanks, Aerik

    Read the article

  • What libraries are available for manipulating super large images in .NET?

    - by tpower
    I have some really large files, for example a 320 MB tif file with 14000 x 9000 pixels. The operations I need to perform are basically scaling the images to get smaller versions and breaking the image into tiles. My code works fine with small files using the .NET Bitmap objects, but I will occasionally get Out of Memory exceptions for larger files. I've tried using the FreeImage library's FreeImageBitmap but have the same problems. I'm using something like the following to scale the image:

      using (Graphics g = Graphics.FromImage((Image)result))
      {
          g.DrawImage(
              source,
              xOffset,
              yOffset,
              source.Width * scale,
              source.Height * scale
          );
      }

    Ideally I'd like a third-party library to do all the hard work, but if you have any tips or resources with more information I would appreciate it.
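
    For what it's worth, the technique that sidesteps the Out of Memory problem in any library that supports it is decoding one sub-region at a time instead of materializing the whole bitmap. A minimal sketch of region-limited decoding using Java's ImageIO, just to show the shape of the approach (assumes a TIFF-capable ImageIO plugin is installed, so getImageReaders() returns a reader):

      import java.awt.Rectangle;
      import java.awt.image.BufferedImage;
      import java.io.File;
      import javax.imageio.ImageIO;
      import javax.imageio.ImageReadParam;
      import javax.imageio.ImageReader;
      import javax.imageio.stream.ImageInputStream;

      public class TileReader {
          // Decode one tile of a huge image without loading the full frame.
          static BufferedImage readTile(File f, int x, int y, int w, int h) throws Exception {
              ImageInputStream in = ImageIO.createImageInputStream(f);
              ImageReader reader = ImageIO.getImageReaders(in).next();
              reader.setInput(in);
              ImageReadParam param = reader.getDefaultReadParam();
              param.setSourceRegion(new Rectangle(x, y, w, h)); // decode just this tile
              BufferedImage tile = reader.read(0, param);
              reader.dispose();
              in.close();
              return tile;
          }
      }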

    Read the article

  • Developer oriented hardware benchmarks?

    - by Promit
    Perhaps I'm looking in the wrong places, but every hardware benchmark I've found, for nearly any component, is oriented towards gamers and/or workstations (video editing, etc.). Is anyone doing benchmarks that are relevant to software developers? For example, take SSDs. I don't care how fast Crysis loads off an SSD - that is completely worthless information. What I want to know is: which drive yields the quickest build times? What about IntelliSense and refactoring operations? Which RAID configuration has the biggest benefit? I could probably come up with more examples, but you get the point. Long story short, where are the benchmarks that tell me which hardware will be most effective in helping me be a productive software developer?

    Read the article
