Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • Reading a memorystream

    - by user1842828
    Using several examples here on StackOverflow, I thought the following code would decompress a gzip file, then read the memory stream and write its contents to the console. No errors occur but I get no output.

        public static void Decompress(FileInfo fileToDecompress)
        {
            string currentFileName = fileToDecompress.FullName;
            string newFileName = currentFileName.Remove(currentFileName.Length - fileToDecompress.Extension.Length);

            using (FileStream originalFileStream = fileToDecompress.OpenRead())
            using (GZipStream decompressionStream = new GZipStream(originalFileStream, CompressionMode.Decompress))
            using (MemoryStream memStream = new MemoryStream())
            {
                // Read from the GZipStream; the original code read from
                // decompressedFileStream, the freshly created (and therefore
                // empty) output file, which is why nothing was ever printed.
                decompressionStream.CopyTo(memStream);
                File.WriteAllBytes(newFileName, memStream.ToArray());

                memStream.Position = 0;
                using (var sr = new StreamReader(memStream))
                    Console.WriteLine("Stream Output: " + sr.ReadToEnd());
            }
        }

  • How can I connect two or more machines via TCP cable to form a network grid?

    - by Gath
    How can I connect two or more machines to form a network grid, and how can I distribute the workload between them? What operating systems do I need to run on the machines, and what application should I use to manage the load balancing? NB: I read somewhere that Google uses cheap machines to perform this feat; how do they connect two network cards ('teaming') and distribute load across the machines? Good practical examples with actual code samples would serve me well. Pointers to a good site where I could read up on this would be highly appreciated.

  • SQL Server Mapping a user to a login and adding roles programmatically

    - by user163457
    In my SQL Server 2005 server I create databases and logins using Management Studio. My application requires that I give a newly created user read and write permissions to another database. To do this I right-click the newly created login, select Properties and go to User Mapping. I put a check beside the database to map this login to the db, and select db_datareader and db_datawriter as the roles to map. Can this be done programmatically? I've read about using ALTER USER and sp_change_users_login, but I'm having problems getting these to work; since sp_change_users_login has been deprecated, I'd prefer to use ALTER USER. Please note that my understanding of SQL Server database users/logins/roles is basic.
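
    A minimal sketch of how this can be scripted (shown in Python with pyodbc purely for illustration; the connection string and login name are hypothetical). CREATE USER ... FOR LOGIN and sp_addrolemember are the relevant T-SQL on SQL Server 2005:

        import pyodbc

        # Hypothetical connection to the target database (not master).
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=OtherDb;"
            "Trusted_Connection=yes", autocommit=True)
        cur = conn.cursor()

        login = "new_login"  # hypothetical login created earlier
        # Map the server login to a database user...
        cur.execute("CREATE USER [%s] FOR LOGIN [%s]" % (login, login))
        # ...then grant the read and write roles.
        cur.execute("EXEC sp_addrolemember 'db_datareader', '%s'" % login)
        cur.execute("EXEC sp_addrolemember 'db_datawriter', '%s'" % login)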

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largish file in Python. All I'm doing is:

        import gzip

        counter = 0
        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if counter % 1000000 == 0:
                print counter
        f.close()   # note: "f.close" without parentheses never closes the file

    This takes around 10m25s just to open the file, read the lines and increment this counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:

        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;

    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
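
    One avenue worth trying (a sketch, not a benchmarked answer): let an external zcat process do the decompression, exactly as the Perl version does, and have Python iterate over the pipe; gzip.open in older Pythons was notoriously slow at line-by-line reads. Written in the Python 2 style of the snippet above:

        import subprocess

        # Decompress in a child process, mirroring Perl's "/bin/zcat $logfile |".
        proc = subprocess.Popen(['zcat', pathToLog], stdout=subprocess.PIPE)
        counter = 0
        for line in proc.stdout:
            counter += 1
            if counter % 1000000 == 0:
                print counter
        proc.stdout.close()
        proc.wait()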

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within each thread). The problem persists if the threads instead render the upper and lower halves of the screen respectively. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? Again, from what I've understood, this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image (spam prevention stops me from posting images directly). PS: I use exactly the same implementation in both cases; the ONLY difference is a single thread vs. two threads created for the rendering.

  • How do you do real time document tracking?

    - by Nimish
    I was considering different document-tracking options and came across DocTracking.com. DocTracking.com allows you to upload documents (PDF, Word, etc.); it adds some kind of invisible tracking to them and returns the document to you, which can then be used just as you would use it otherwise. This tracking tells you when your documents were opened, who opened them (IP), the geo-location of the opening, whether they are re-opened or forwarded, what pages were read and for how long, and what was printed. Any leads on how this could be done would be appreciated.

  • please give me a solution

    - by user327832
    Here is the code I have written so far, but it ended up giving me an error:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class Main {
            public static void main(String[] args) throws Exception {
                File file = new File("c:\\filea.txt");
                InputStream is = new FileInputStream(file);
                long length = file.length();
                System.out.println(length);
                byte[] bytes = new byte[(int) length];  // was "bytes[]": bytes is not a Java type
                try {
                    int offset = 0;
                    int numRead = 0;
                    // Fill the buffer; the original loop ignored offset and
                    // overwrote the start of the buffer on every iteration.
                    while (offset < bytes.length
                            && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
                        offset += numRead;
                    }
                } catch (IOException e) {
                    System.out.println("Could not completely read file " + file.getName());
                }
                is.close();
                // A byte[] cannot be cast to String[]; build a String from the bytes.
                System.out.println(new String(bytes));
            }
        }

  • streaming xml pretty printer in C/C++ using expat or libxml2?

    - by Mark Zeren
    I have a library that outputs xml without whitespace, all on one line. In some cases I'd like to pretty print that output. I'm looking for a BSD-ish licensed C/C++ library or sample code that will take a raw xml byte stream and pretty print it. Here's some pseudo code showing one way that I might use this functionality:

        void my_write(const char* buf, int len);

        PrettyPrinter pp(bind(&my_write));
        while (...) {
            // ... get some more xml ...
            const char* buf = xmlSource.get_buf();
            int len = xmlSource.get_buf_len();
            int written = pp.write(buf, len);  // calls my_write with pretty printed xml
            // ... error handling, maybe call write again, etc. ...
        }

    I'd like to avoid instantiating a DOM representation. I already have dependencies on the expat and libxml2 shared libraries, and I'd rather not add any more shared library dependencies.

  • Teacher confused about MVC?

    - by Gideon
    I have an assignment to create a game in Java using MVC as a pattern. The thing is that the stuff I read about MVC isn't really what my teacher is telling me. What I read is that the Model consists of the information objects, which are manipulated by the controllers. So in a game the controller mutates the placement of the objects and checks whether there are any collisions, etc. What my teacher told me is that I should put everything that is universal to the platform in the models, and the controllers should only tell the model which input was given. That means the game loop would be in a model class, and so would the collision checks, etc. So what I get from his story is that the View is the screen, the Controller is the input handler, and the Model is the rest. Can someone point me in the right direction?

  • A good F# codebase to learn from

    - by Lucas
    Hi all, I've been teaching myself F# for a while now. I've read Programming F# by Chris Smith (great book) and I've written a few small scripts for getting the job done here and there. But IMO the best way to learn a new programming language—and more importantly, the idioms that come with it—is to read a good open source codebase written in that language. Naturally, writing code in that language is crucial, but in the beginning, you're basically struggling with your own ignorance about how things should be done. You could perform certain tasks one way or the other, but it takes experience to realize the flaws and virtues of each. Even after you've gotten a firm grasp of how things work, reading the code of people who have an even firmer one helps a great deal. Most would agree that the most insightful parts of any learn-a-programming-language book are the code examples, and reading a well-written open source codebase is the next level of that. So are there any out there for F#?

  • Kernel dealing with the section headers in an ELF

    - by uki
    I recently read that the kernel and the dynamic loader mostly deal with the program header table in an ELF file, and that assemblers, compilers and linkers deal with the section header table. The numbers of program headers and section headers are given in the ELF header, in fields named e_phnum and e_shnum respectively. e_phnum is two bytes in size, so if the number of program headers is 65535 (0xffff) or more, a scheme known as extended numbering is used: e_phnum is set to 0xffff and the sh_info field of the zeroth section header holds the actual count. My question is: if the count of program headers exceeds this limit, does that mean the kernel and/or the dynamic loader end up having to read the section table?
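
    A small sketch of what that lookup looks like in practice (Python, assuming a little-endian ELF64 file; the field offsets follow the standard ELF64 header and section header layouts):

        import struct

        def program_header_count(path):
            with open(path, 'rb') as f:
                ehdr = f.read(64)                        # ELF64 header is 64 bytes
                e_shoff = struct.unpack_from('<Q', ehdr, 40)[0]
                e_phnum = struct.unpack_from('<H', ehdr, 56)[0]
                if e_phnum != 0xffff:                    # 0xffff == PN_XNUM
                    return e_phnum
                # Extended numbering: the real count is in sh_info of
                # section header 0 (sh_info sits 44 bytes into the entry).
                f.seek(e_shoff + 44)
                return struct.unpack('<I', f.read(4))[0]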

  • Just for fun (C# and C++)...time yourself [closed]

    - by Ted
    Possible Duplicate: What is your solution to the FizzBuzz problem? OK guys, this is just for fun, no flaming allowed! I was reading http://www.codinghorror.com/blog/2007/02/why-cant-programmers-program.html and couldn't believe the following sentence... "I've also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution." For those that can't be bothered to read the article, the background is this: ... I set out to develop questions that can identify this kind of developer and came up with a class of questions I call "FizzBuzz Questions", named after a game children often play (or are made to play) in schools in the UK. An example of a FizzBuzz question is the following: Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". So I decided to test myself. It took me 5 minutes in C++ and 3 minutes in C#! So just for fun, try it and post your timing + the language used! P.S. NO UNIT TESTS REQUIRED, NO OUTSOURCING ALLOWED, SWITCH OFF RESHARPER! :-) P.P.S. If you'd like to post your source then feel free.
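
    For reference, a minimal Python take on the problem exactly as stated above (one of many reasonable solutions):

        for n in range(1, 101):
            if n % 15 == 0:       # multiple of both three and five
                print("FizzBuzz")
            elif n % 3 == 0:
                print("Fizz")
            elif n % 5 == 0:
                print("Buzz")
            else:
                print(n)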

  • how to control textbox type to double in visual basic?

    - by fema
    Hi, I'd like to make a textbox that accepts only numbers, and not just integers but doubles. I've read here about e.Handled = Not Char.IsDigit(e.KeyChar), and it works, but it can only be used for integers, since it rejects the decimal point. Another thing I've read here is If Not Double.TryParse(TextBox2.Text, value) Then ..., and it would work fine, except that it accepts only a decimal comma instead of a point. I don't know whether that's because of my locale settings (Hungary, where we use commas instead of points), but I don't have any other idea how to solve my problem, and the SQL server I send my data to uses a decimal point. Thanks in advance.

  • Has jQuery core development been slowing down?

    - by David Murdoch
    So, I regularly head over to jQuery's commit history on GitHub just to read through the new code committed to jQuery core. But there hasn't been anything new committed since April 24th. I've already read through jQuery core a few times and I'm pretty familiar with it, which is why I like reading the commits. I just like to see what changed, why it was changed, etc. Why has there been a slowdown in jQuery commits on GitHub? Does anyone have recommendations for other places I can go to watch good JavaScript code being developed? My motive for reading jQuery's commit history is similar to the reason I browse through accepted answers here on StackOverflow: to learn from people smarter than me. With that said, I am interested in the answer to this question's title, but I am more interested in finding a substitute for reading the jQuery commits.

  • Seam + hibernate + jsf on weblogic

    - by Kiva
    I'm making a little project with Seam, Hibernate and JSF. The project runs on JBoss 5.1. My boss wants to deploy it on WebLogic. I read in the Seam documentation that Seam and WebLogic don't work well together. I would like to know whether I can use Hibernate (with JPA) and JSF on WebLogic, and what framework (Struts, Spring?) I could use to replace Seam. Edit: In the Seam documentation (chapter 39, WebLogic integration) I found the following: "For several releases of Weblogic there has been an issue with how Weblogic generates stubs and compiles EJB's that use variable arguments in their methods. This is confirmed in the Weblogic 9.X and 10.0.MP1 versions. Unfortunately the 10.3 version only partially addresses the issue as detailed below." So I want to know whether other problems like this exist. Edit 2: I use WebLogic 10.3.

  • Query returning related assets

    - by GMo
    I have 2 tables, one is an assets table which holds digital assets (e.g. article, images etc), the 2nd table is an asset_links table which maps 1-1 relationships between assets contained within the assets table. Here are the table definitions:

    Asset

        +---------------+--------------+------+-----+---------+----------------+
        | Field         | Type         | Null | Key | Default | Extra          |
        +---------------+--------------+------+-----+---------+----------------+
        | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
        | source        | varchar(255) | YES  |     | NULL    |                |
        | title         | varchar(255) | YES  |     | NULL    |                |
        | date_created  | datetime     | YES  |     | NULL    |                |
        | date_embargo  | datetime     | YES  |     | NULL    |                |
        | date_expires  | datetime     | YES  |     | NULL    |                |
        | date_updated  | datetime     | YES  |     | NULL    |                |
        | keywords      | varchar(255) | YES  |     | NULL    |                |
        | status        | int(11)      | YES  |     | NULL    |                |
        | priority      | int(11)      | YES  |     | NULL    |                |
        | fk_site       | int(11)      | YES  | MUL | NULL    |                |
        | resource_type | varchar(255) | YES  |     | NULL    |                |
        | resource_id   | int(11)      | YES  |     | NULL    |                |
        | fk_user       | int(11)      | YES  | MUL | NULL    |                |
        +---------------+--------------+------+-----+---------+----------------+

    Asset_links

        +-----------+---------+------+-----+---------+----------------+
        | Field     | Type    | Null | Key | Default | Extra          |
        +-----------+---------+------+-----+---------+----------------+
        | id        | int(11) | NO   | PRI | NULL    | auto_increment |
        | asset_id1 | int(11) | YES  |     | NULL    |                |
        | asset_id2 | int(11) | YES  |     | NULL    |                |
        +-----------+---------+------+-----+---------+----------------+

    In the asset_links table there are the following rows: 1-3, 1-4, 2-10, 2-56. I am looking to write one query which will return all assets which satisfy any asset search criteria, and within the same query return all of the linked asset data for each matching asset. E.g. a query returning assets 1 and 2 would return:

        Asset 1 attributes - Asset 3 attributes - Asset 4 attributes
        Asset 2 attributes - Asset 10 attributes - Asset 56 attributes

    What is the best way to write the query?

  • Python logging in Django

    - by Jeff
    I'm developing a Django app, and I'm trying to use Python's logging module for error/trace logging. Ideally I'd like to have different loggers configured for different areas of the site. So far I've got all of this working, but one thing has me scratching my head. I have the root logger going to sys.stderr, and I have configured another logger to write to a file. This is in my settings.py file:

        import logging

        sviewlog = logging.getLogger('MyApp.views.scans')
        view_log_handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
        view_log_handler.setLevel(logging.INFO)
        view_log_handler.setFormatter(
            logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
        sviewlog.addHandler(view_log_handler)

    Seems pretty simple. Here's the problem, though: whatever I write to sviewlog gets written to the log file twice. The root logger only prints it once. It's like addHandler() is being called twice. And when I put my code through a debugger, this is exactly what I see: the code in settings.py is getting executed twice, so two FileHandlers are created and added to the same logger instance. But why? And how do I get around this? Can anyone tell me what's going on here? I've tried moving the sviewlog logger/handler instantiation code to the file where it's used (since that actually seems like the appropriate place to me), but I have the same problem there. Most of the examples I've seen online use only the root logger, and I'd prefer to have multiple loggers.
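
    Since the module really is being executed twice, a common defensive sketch (assuming nothing else is supposed to attach handlers to this logger) is to add the handler only when the logger does not already have one:

        import logging

        sviewlog = logging.getLogger('MyApp.views.scans')
        if not sviewlog.handlers:  # skip if a FileHandler was attached on an earlier import
            handler = logging.FileHandler('C:\\MyApp\\logs\\scan_log.log')
            handler.setLevel(logging.INFO)
            handler.setFormatter(
                logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
            sviewlog.addHandler(handler)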

  • Do you use a grid system when designing a web page?

    - by johnny
    I'm trying to figure out why I would use a grid system. I have read some about it, but I just don't get it. I'm used to just putting stuff in HTML on a page and being done with it, but I have a new project and would like to use a grid because apparently it is a best practice. I read in one article, referenced in another SO question, that grid design is used in all sorts of development, even application form design. That made me think of things like snap-to-grid, etc., and I didn't know if the grid in the web design sphere was the same. I was hoping someone could give me a brief but not overly complicated overview, and not a link to Google, which I have used already. Thank you for any help.

  • How should I implement simple caches with concurrency on Redis?

    - by solublefish
    Background: I have a 2-tier web service - just my app server and an RDBMS. I want to move to a pool of identical app servers behind a load balancer. I currently cache a bunch of objects in-process, and I hope to move them to a shared Redis. I have a dozen or so caches of simple, small business objects. For example, I have a set of Foos. Each Foo has a unique FooId and an OwnerId. One "owner" may own multiple Foos. In a traditional RDBMS this is just a table with an index on the PK FooId and one on OwnerId. I'm caching this in one process simply:

        Dictionary<int, Foo> _cacheFooById;
        Dictionary<int, HashSet<int>> _indexFooIdsByOwnerId;

    Reads come straight from here, and writes go here and to the RDBMS. I usually have this invariant: "For a given group [say, by OwnerId], the whole group is in cache or none of it is." So when I cache-miss on a Foo, I pull that Foo and all the owner's other Foos from the RDBMS. Updates keep the index up to date and respect the invariant. When an owner calls GetMyFoos, I never have to worry that some are cached and some aren't.

    What I did already: the first/simplest answer seems to be to use plain ol' SET and GET with a composite key and JSON value:

        SET("ServiceCache:Foo:" + theFoo.Id, JsonSerialize(theFoo));

    I later decided I liked:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));

    That lets me get all the values in one cache as HVALS. It also felt right - I'm literally moving hashtables to Redis, so perhaps my top-level items should be hashes. This works to first order. If my high-level code is like:

        UpdateCache(myFoo);
        AddToIndex(myFoo);

    That translates into:

        HSET("ServiceCache:Foo", theFoo.FooId, JsonSerialize(theFoo));
        var myFoos = JsonDeserialize(HGET("ServiceCache:FooIndex", theFoo.OwnerId));
        myFoos.Add(theFoo.OwnerId);
        HSET("ServiceCache:FooIndex", theFoo.OwnerId, JsonSerialize(myFoos));

    However, this is broken in two ways. First, two concurrent operations can read/modify/write at the same time; the latter "wins" the final HSET and the former's index update is lost. Second, another operation could read the index in between the first and second lines; it would miss a Foo that it should find.

    So how do I index properly? I think I could use a Redis set instead of a JSON-encoded value for the index. That would solve part of the problem, since the "add-to-index-if-not-already-present" would be atomic. I also read about using MULTI as a "transaction", but it doesn't seem like it does what I want. Am I right that I can't really do MULTI; HGET; {update}; HSET; EXEC, since it doesn't even do the HGET before I issue the EXEC? I also read about using WATCH and MULTI for optimistic concurrency, then retrying on failure. But WATCH only works on top-level keys, so it's back to SET/GET instead of HSET/HGET - and now I need a new index-like thing to support getting all the values in a given cache. If I understand it right, I can combine all these things to do the job. Something like:

        while (!succeeded) {
            WATCH("ServiceCache:Foo:" + theFoo.FooId);
            WATCH("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId);
            WATCH("ServiceCache:FooIndexAll");
            MULTI();
            SET("ServiceCache:Foo:" + theFoo.FooId, JsonSerialize(theFoo));
            SADD("ServiceCache:FooIndexByOwner:" + theFoo.OwnerId, theFoo.FooId);
            SADD("ServiceCache:FooIndexAll", theFoo.FooId);
            EXEC();
            // TODO somehow set succeeded properly
        }

    Finally I'd have to translate this pseudocode into real code depending on how my client library uses WATCH/MULTI/EXEC; it looks like they need some sort of context to hook them together. All in all this seems like a lot of complexity for what has to be a very common case; I can't help but think there's a better, smarter, Redis-ish way to do things that I'm just not seeing.

    How do I lock properly? Even if I had no indexes, there's still a (probably rare) race condition:

        A: HGET - cache miss
        B: HGET - cache miss
        A: SELECT
        B: SELECT
        A: HSET
        C: HGET - cache hit
        C: UPDATE
        C: HSET
        B: HSET   ** this is stale data that's clobbering C's update

    Note that C could just be a really fast A. Again I think WATCH, MULTI, retry would work, but... ick. I know in some places people use special Redis keys as locks for other objects. Is that a reasonable approach here? Should those be top-level keys like ServiceCache:FooLocks:{Id} or ServiceCache:Locks:Foo:{Id}? Or should I make a separate hash for them - ServiceCache:Locks with subkeys Foo:{Id}, or ServiceCache:Locks:Foo with subkeys {Id}? How would I work around abandoned locks, say if a transaction (or a whole server) crashes while "holding" the lock?
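
    For what it's worth, the WATCH/MULTI retry loop above maps fairly directly onto real client code. A sketch in Python with redis-py (the key names and Foo fields are the hypothetical ones from this question):

        import json
        import redis

        r = redis.Redis()

        def update_foo_and_index(foo):
            key = "ServiceCache:Foo:%d" % foo.foo_id
            owner_index = "ServiceCache:FooIndexByOwner:%d" % foo.owner_id
            with r.pipeline() as pipe:
                while True:
                    try:
                        # Abort and retry if anyone touches these keys mid-update.
                        pipe.watch(key, owner_index, "ServiceCache:FooIndexAll")
                        pipe.multi()
                        pipe.set(key, json.dumps(foo.__dict__))
                        pipe.sadd(owner_index, foo.foo_id)
                        pipe.sadd("ServiceCache:FooIndexAll", foo.foo_id)
                        pipe.execute()  # raises WatchError if a watched key changed
                        break
                    except redis.WatchError:
                        continue  # lost the race; try again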

  • What should a string-accepting interface look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str);       // (1)

    The other way would be to use an std::string:

        void f(const string& str);     // (2)

    It's also possible to write an overload and accept both:

        void f(const char* str);       // (3)
        void f(const string& str);

    Or even a template in conjunction with the Boost string algorithms:

        template<class Range>
        void f(const Range& str);      // (4)

    My thoughts are:

    (1) is not C++ish and may be less efficient when subsequent operations need to know the string length.

    (2) is bad because now f("long very long C string") invokes the construction of a std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources.

    (3) causes code duplication, although one f can call the other depending on which is the more efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*.

    (4) doesn't work with separate compilation and may cause even larger code bloat.

    Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there a single guideline I can follow? What's your experience?

  • How to deploy App_Data files with Azure cloud service (web role)

    - by user2977157
    I have a read-only data file (for IP geolocation) that my web role needs to read. It is currently in the App_Data folder, which is NOT included in the deployment package for the cloud service. Unlike "web deploy", an Azure cloud service deployment has no checkbox to include/exclude App_Data. Is there a reasonable way to get the deployment package to include the App_Data folder/files? Or is using Azure storage for this sort of thing the better way to go (cost- and performance-wise)? I am using Visual Studio 2013 and the Azure SDK 2.2.

  • Make function declarations based on function definitions

    - by Clinton Blackmore
    I've written a .cpp file with a number of functions in it, and now need to declare them in the header file. It occurred to me that I could grep the file for the class name and get the declarations that way, and it would've worked well enough, too, had the complete function signature before the definition (return type, name, and parameters, but not the function body) been on one line. It seems to me that this is something that would be generally useful and must have been solved a number of times. I am happy to edit the output and am not worried about edge cases; anything that gives me results that are right 95% of the time would be great. So if, for example, my .cpp file had:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t start_register,   // start of the register range
            uint8_t bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)      // optional user-supplied buffer
        {
            ...
        }

    and a number of other similar functions, then getting this back:

        i2cstatus_t NXTI2CDevice::writeRegisters(
            uint8_t start_register,   // start of the register range
            uint8_t bytes_to_write,   // number of bytes to write
            uint8_t* buffer = 0)

    for inclusion in the header file, after a little editing, would be fine. Getting this back:

        i2cstatus_t writeRegisters(
            uint8_t start_register,
            uint8_t bytes_to_write,
            uint8_t* buffer);

    or this:

        i2cstatus_t writeRegisters(uint8_t start_register, uint8_t bytes_to_write, uint8_t* buffer);

    would be even better.
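
    A rough sketch of such a tool in Python (a heuristic, in keeping with the "right 95% of the time" tolerance above; the regex will miss or mangle exotic declarations, and any output it produces should be eyeballed before pasting into a header):

        import re
        import sys

        # Heuristic: match "ReturnType Class::name(params)" up to the opening
        # brace of the function body; parameters may span several lines.
        signature = re.compile(
            r'^[A-Za-z_][\w:<>,*&~\s]*?\([^;{}]*\)[^;{}]*?(?=\{)',
            re.MULTILINE)

        source = open(sys.argv[1]).read()
        for match in signature.finditer(source):
            decl = match.group(0)
            if decl.split()[0] in ('if', 'while', 'for', 'switch'):
                continue                                   # skip control-flow false positives
            decl = re.sub(r'//[^\n]*', '', decl)           # drop trailing comments
            decl = re.sub(r'\s*=\s*[^,)]+', '', decl)      # drop default arguments
            decl = re.sub(r'\b\w+::', '', decl, count=1)   # drop the Class:: qualifier
            print(' '.join(decl.split()) + ';')            # collapse onto one line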

  • How do you unit test new code that uses a bunch of classes that cannot be instantiated in a test harness?

    - by trendl
    I'm writing a messaging layer that should handle communication with a third-party API. The API has a bunch of classes that cannot be easily (if at all) instantiated in a test harness. I decided to wrap each class that I need in my unit tests with an adapter/wrapper and expose the members I need through this adapter class. Often I need to expose the wrapped type as well, which I do by exposing it as an object. I have also provided an interface for each of the adapter classes so that I can use them with a mocking framework. This way I can substitute the classes in a test for whatever I need. The downside is that I have a bunch of adapter classes that so far serve no purpose other than testing. For me this is a good enough reason by itself, but others may find it isn't. Possibly, when I write an implementation for another third-party vendor's API, I may be able to reuse much of my code and only provide the adapters specific to that vendor's API. However, this is a bit of a long shot, and I'm not actually sure it will work. What do you think? Is this approach viable, or am I writing unnecessary code that serves no real purpose? Let me say that I do want to write unit tests for my messaging layer, and I do not know how to do it otherwise.

  • Unit testing a method with many possible outcomes

    - by Cthulhu
    I've built a simple-ish method that constructs a URL out of approximately 5 parts: base address, port, path, 'action', and a set of parameters. Of these, only the address part is mandatory; the other parts are all optional. A valid URL has to come out of the method for each permutation of input parameters, such as:

        address
        address port
        address port path
        address path
        address action
        address path action
        address port action
        address port path action
        address action params
        address path action params
        address port action params
        address port path action params

    and so forth. The basic approach is to write one unit test for each of these possible outcomes, each unit test passing the address and any of the optional parameters to the method and testing the outcome against the expected output. However, I wonder: is there a Better (tm) way to handle a case like this? Are there any (good) unit test patterns for this? (rant) I only now realize that I learned to write unit tests a few years ago but never really (feel like I) advanced in the area, and that every unit test is a repeat of building parameters, expected outcome, filling mock objects, calling a method and testing the outcome against the expected outcome. I'm pretty sure this is the way to go in unit testing, but it gets kinda tedious, yanno. Advice on that matter is always welcome. (/rant) (note) Christmas weekend approaching; probably won't reply to suggestions until next week. (/note)
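
    One common answer to exactly this shape of problem is table-driven (parameterized) tests: a single test body with one row per permutation. A sketch in Python with pytest, where build_url, its module, and the expected strings are hypothetical stand-ins for the method under test:

        import pytest

        from urlbuilder import build_url  # hypothetical module under test

        @pytest.mark.parametrize("kwargs, expected", [
            (dict(address="example.com"), "http://example.com"),
            (dict(address="example.com", port=8080), "http://example.com:8080"),
            (dict(address="example.com", port=8080, path="a/b"), "http://example.com:8080/a/b"),
            (dict(address="example.com", path="a/b", action="edit"), "http://example.com/a/b/edit"),
            # ... one row for each remaining permutation ...
        ])
        def test_build_url(kwargs, expected):
            assert build_url(**kwargs) == expected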
