Search Results

Search found 11573 results on 463 pages for 'store'.


  • php - using file() incrementally?

    - by NeedBeerStat
    I'm not sure if this is possible; I've been googling for a solution... But, essentially, I have a very large file, the lines of which I want to store in an array. Thus, I'm using file(), but is there a way to do that in batches? So that after every, say, 100 lines it creates, it "pauses"? I think there's likely something I can do with a foreach loop, but I'm not sure that I'm thinking about it the right way... Something like:

        $i = 0;
        $j = 0;
        $throttle = 100;
        foreach ($files as $k => $v) {
            if ($i < $j + $throttle && $i > $j) {
                $lines[] = file($v);
                // Do some other stuff, like importing into a db
            }
            $i++;
            $j++;
        }

    But I think that won't really work, because $i and $j will always be equal... Anyway, feeling muddled... Can someone help me think a little clearer?
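
    One way to get the batching (a sketch, not from the post) is to skip file() entirely and read with fgets(), flushing every 100 lines; importLines() is a hypothetical stand-in for the import-into-the-db step:

        <?php
        $throttle = 100;
        $batch = array();
        $handle = fopen('large-file.txt', 'rb');   // assumed path
        while (($line = fgets($handle)) !== false) {
            $batch[] = rtrim($line, "\r\n");
            if (count($batch) >= $throttle) {
                importLines($batch);   // hypothetical: insert this batch into the db
                $batch = array();      // release memory before the next batch
            }
        }
        if ($batch) {
            importLines($batch);       // flush the final partial batch
        }
        fclose($handle);

    This keeps at most 100 lines in memory at a time, which is the "pause" the question is after.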

    Read the article

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects. The data comes from the database as XML via a web service. Now, I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework, and I will have the desktop app use a local data store. I would like to use the same XSLT files in the desktop app that I use in the web app for the form creation, so that if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in Windows Forms. If I use a browser control to render the XSL transformation result, then how could I handle click events, etc., in the form? Also, can I launch other windows as "dialog boxes" from my Windows Forms app (I do this in my web app using RadControls functionality)? Thanks for any advice you can give.
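
    A minimal sketch of the browser-control route (the class name and URL-dispatch convention are assumptions, not the poster's code): run the shared XSLT with XslCompiledTransform, hand the HTML to a WebBrowser control, and turn link clicks into form events by intercepting navigation.

        // C#, WinForms
        using System;
        using System.IO;
        using System.Windows.Forms;
        using System.Xml.Xsl;

        public class XsltFormHost : Form
        {
            private readonly WebBrowser browser = new WebBrowser();

            public XsltFormHost(string xmlPath, string xsltPath)
            {
                browser.Dock = DockStyle.Fill;
                Controls.Add(browser);

                // Reuse the same .xslt files the web app uses.
                var transform = new XslCompiledTransform();
                transform.Load(xsltPath);
                using (var writer = new StringWriter())
                {
                    transform.Transform(xmlPath, null, writer);
                    browser.DocumentText = writer.ToString();
                }

                // Clicks arrive as navigation requests; cancel and dispatch them.
                browser.Navigating += (s, e) =>
                {
                    if (e.Url.ToString() == "about:blank") return; // initial load
                    e.Cancel = true;
                    HandleClick(e.Url.ToString());
                };
            }

            private void HandleClick(string url)
            {
                // Map URLs emitted by the XSLT (e.g. app://save?id=42) to handlers.
                // Ordinary modal dialogs work fine from here:
                // new DetailsDialog().ShowDialog(this);
            }
        }

    The same idea works with the control's ObjectForScripting property if the generated forms call window.external instead of navigating.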

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I’m using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I’m performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then apply *= to a log-based value. I have overloaded the *= operator for my log-based class; the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are floating-point division, log() and floating-point summation. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler- and architecture-dependent, so I’ll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operations and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
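
    A rough way to measure it (a sketch; the numbers will vary with compiler flags and CPU, so treat them as indicative only):

        // C++03-friendly microbenchmark
        #include <cmath>
        #include <cstdio>
        #include <ctime>

        int main()
        {
            const int N = 10000000;
            volatile double sink = 0.0;   // volatile discourages the compiler
                                          // from deleting the loops entirely
            std::clock_t t0 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + 1.0 / i;               // floating-point division
            std::clock_t t1 = std::clock();
            for (int i = 1; i <= N; ++i)
                sink = sink + std::log((double)i);   // log() call
            std::clock_t t2 = std::clock();

            std::printf("division: %.0f ms\nlog():    %.0f ms\n",
                        1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
                        1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }

    Benchmarking the real operator*= on real data is still the more trustworthy test, since the conversion constructors the question mentions are part of the cost being compared.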

    Read the article

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (tens to hundreds of thousands) of objects. Each object is email-like - there is a main text body, and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it. It will be read often. Deletions will be rare. The data is almost all human-readable text, so it will be readily compressible. A system that lets me issue the I/Os and manage the memory and caching would be ideal. I'm going to keep the indexes in memory, using them to map to the single (and primary) key for the objects. Once I have the key, I'll load the object from disk, or from the cache. The data management system needs to be part of my application - I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) thing would be OK. I believe that a database is an obvious choice, but this needs to be super-fast for looking an object up and loading it into memory. I am not experienced with database tech, and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job - it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
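
    For scale, the design described maps onto very little code (a bare-bones C# sketch; no durability, compression or error handling, and it assumes a single Read returns the whole record, which holds for small records but would need a loop in production):

        using System;
        using System.Collections.Generic;
        using System.IO;

        public class ObjectStore : IDisposable
        {
            private readonly FileStream data;
            // in-memory index: GUID -> (offset, length) in the data file
            private readonly Dictionary<Guid, KeyValuePair<long, int>> index =
                new Dictionary<Guid, KeyValuePair<long, int>>();

            public ObjectStore(string path)
            {
                data = new FileStream(path, FileMode.OpenOrCreate,
                                      FileAccess.ReadWrite);
            }

            public Guid Add(byte[] body)            // writes are append-only
            {
                Guid id = Guid.NewGuid();
                data.Seek(0, SeekOrigin.End);
                index[id] = new KeyValuePair<long, int>(data.Position, body.Length);
                data.Write(body, 0, body.Length);
                return id;
            }

            public byte[] Load(Guid id)
            {
                KeyValuePair<long, int> entry = index[id];
                byte[] buffer = new byte[entry.Value];
                data.Seek(entry.Key, SeekOrigin.Begin);
                data.Read(buffer, 0, buffer.Length);
                return buffer;
            }

            public void Dispose() { data.Dispose(); }
        }

    The index itself still has to be persisted (or rebuilt by scanning the file at startup) - which is exactly the part embedded engines such as SQLite or Berkeley DB already do well.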

    Read the article

  • C++ and its type system: How to deal with data with multiple types?

    - by sub
    "Introduction" I'm relatively new to C++. I went through all the basic stuff and managed to build 2-3 simple interpreters for my programming languages. The first thing that gave and still gives me a headache: Implementing the type system of my language in C++ Think of that: Ruby, Python, PHP and Co. have a lot of built-in types which obviously are implemented in C. So what I first tried was to make it possible to give a value in my language three possible types: Int, String and Nil. I came up with this: enum ValueType { Int, String, Nil }; class Value { public: ValueType type; int intVal; string stringVal; }; Yeah, wow, I know. It was extremely slow to pass this class around as the string allocator had to be called all the time. Next time I've tried something similar to this: enum ValueType { Int, String, Nil }; extern string stringTable[255]; class Value { public: ValueType type; int index; }; I would store all strings in stringTable and write their position to index. If the type of Value was Int, I just stored the integer in index, it wouldn't make sense at all using an int index to access another int, or? Anyways, the above gave me a headache too. After some time, accessing the string from the table here, referencing it there and copying it over there grew over my head - I lost control. I had to put the interpreter draft down. Now: Okay, so C and C++ are statically typed. How do the main implementations of the languages mentioned above handle the different types in their programs (fixnums, bignums, nums, strings, arrays, resources,...)? What should I do to get maximum speed with many different available types? How do the solutions compare to my simplified versions above?

    Read the article

  • Holding variables in memory, C++

    - by b-gen-jack-o-neill
    Today something strange came to my mind. When I want to hold some string in C (C++) the old way, without using the string header, I just create an array and store that string in it. But I read that any variable definition in the local scope of a function ends up pushing those values onto the stack. So the string actually takes twice the memory needed: first the push instructions are located in memory, but then, when they are executed (pushed onto the stack), another "copy" of the string is created. First the push instructions, then the stack space for one string. So why is it this way? Why doesn't the compiler just add the string (or other variables) to the program instead of creating it once again when executed? Yes, I know you cannot just have some data inside the program block, but it could just be attached to the end of the program, with some jump instruction before it. And then we would just point to that data - it is stored in RAM when the program is executed anyway. Thanks.
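
    A small illustration of the distinction (a sketch): compilers do exactly what the question proposes when you use a pointer to the literal, because the literal lives in the read-only data section; the stack copy only happens when you ask for a writable local array.

        const char* noCopy(void)
        {
            const char* s = "hello";  // s just points into .rodata; nothing is
            return s;                 // copied, and the literal outlives the call
        }

        void stackCopy(void)
        {
            char s[] = "hello";       // the literal IS copied onto the stack here,
            s[0] = 'H';               // precisely so this write is legal
        }

    So the "duplicate" only exists to give each call its own mutable copy; declare the data const (or static) and the compiler keeps a single copy in the program image.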

    Read the article

  • C# XML parsing with LINQ storing directly to a struct?

    - by Luke
    Say I have the following XML schema:

        <root>
          <version>2.0</version>
          <type>fiction</type>
          <chapters>
            <chapter>1</chapter>
            <title>blah blah</title>
          </chapters>
          <chapters>
            <chapter>2</chapter>
            <title>blah blah</title>
          </chapters>
        </root>

    Would it be possible to parse the elements which I know will not be repeated in the XML and store them directly into the struct using LINQ? For example, could I do something like this for "version" and "type":

        //setup structs
        Book book = new Book();
        book.chapter = new Chapter();

        //use LINQ to parse the xml
        var bookData = from b in xmlDoc.Descendants("root")
                       select new
                       {
                           book.version = b.Element("version").Value,
                           book.type = b.Element("type").Value
                       };

    Then, since I know there are multiple "chapters", I can do:

        var chapterData = from c in xmlDoc.Descendants("root")
                          select new { chapter = c.Element("chapters") };

        foreach (var ch in chapterData)
        {
            book.chapter.Add(getChapterData(ch.chapter));
        }
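
    For reference, a working sketch along those lines (Book and Chapter are assumed here to be a class and a struct with these members; the fragment above would not compile, because an anonymous type cannot assign to an existing object):

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        public struct Chapter
        {
            public int Number;
            public string Title;
        }

        public class Book
        {
            public string Version;
            public string Type;
            public List<Chapter> Chapters = new List<Chapter>();
        }

        public static class BookParser
        {
            public static Book Parse(XDocument xmlDoc)
            {
                XElement root = xmlDoc.Element("root");
                Book book = new Book
                {
                    Version = (string)root.Element("version"), // not repeated
                    Type    = (string)root.Element("type")
                };
                book.Chapters = root.Elements("chapters")      // repeated
                    .Select(c => new Chapter
                    {
                        Number = (int)c.Element("chapter"),
                        Title  = (string)c.Element("title")
                    })
                    .ToList();
                return book;
            }
        }

    The pattern is the same in both cases: cast single elements straight into the object's fields, and project the repeated elements with Select into the collection.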

    Read the article

  • Divs are not filtered as :hidden when display:none is applied as an inline style

    - by CodeMonkey
    Hey folks, I have some simple HTML:

        <div id="selectorContainer">
            <div id="chainedSelector" style="display: none;"><% Html.RenderPartial("ProjectSuggest/ChainedProjectSelector"); %></div>
            <div id="suggestSelector"><% Html.RenderPartial("ProjectSuggest/SuggestControl", new SuggestModeDTO{RegistrationMode = Model.RegistrationMode}); %></div>
        </div>

    which is two containers for controls. I have jQuery code to toggle between displaying these, but I need to store as a cookie which one was used last time the user was logged in (i.e. which one was visible). Storing the cookie is not the problem. The problem is that for some reason I am not able to detect which one is hidden, using .is(":hidden"), and not able to detect which one is visible, using .is(":visible"). When I use those two selectors, I always get both - "true" and "true" for both - even though one has display: none; and the other doesn't. Please note that they are NOT placed inside a hidden container which would otherwise hide both, so there are no hidden ancestor containers. Can anyone maybe explain why this could happen? Here is the jQuery code for getting the IDs and for getting the selected one (which currently is broken):

        getChainedSelectorId: function() {
            return "#chainedSelector";
        },
        getSuggestSelectorId: function() {
            return "#suggestSelector";
        },
        getSelectedSelector: function() {
            alert($(this.getChainedSelectorId()).is(":hidden"));
            alert($(this.getSuggestSelectorId()).is(":hidden"));
            var selected = ($(this.getChainedSelectorId()).is(":visible") ?
                this.getChainedSelectorId() : this.getSuggestSelectorId());
            alert(selected);
            return selected;
        },

    Thanks in advance.
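
    A workaround sketch (assuming the markup as posted): since the toggle works by setting the inline style, reading display directly sidesteps whatever is confusing :hidden/:visible, both of which depend on computed layout:

        getSelectedSelector: function () {
            var chainedShown =
                $(this.getChainedSelectorId()).css('display') !== 'none';
            return chainedShown ? this.getChainedSelectorId()
                                : this.getSuggestSelectorId();
        }

    If both divs really report visible, it is also worth checking that the IDs are unique on the page and that the check does not run before the toggle has actually been applied.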

    Read the article

  • Best approach to cache counts from SQL tables?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage, and I'm wondering how to cache things like user post counts and user reply counts. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach for caching the per-user topic and reply counts? Consider a simple scenario: a user follows a link and opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server will run an SQL command which fetches all replies for that page, also inner joining with tblForumReplies to find the number of replies made by each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU-intensive, and I would like to hear your ideas on how to cache things like table counts. If I count replies for each user on insert/delete and store the count in a separate field, how do I keep it synchronized with manual data changes? Suppose I delete replies from SQL by hand.
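
    One pattern that survives manual edits (a T-SQL sketch; tblForumUsers and its TotalReplies column are hypothetical): maintain the denormalized count with a trigger, so any insert or delete - including ad-hoc ones run by hand - adjusts it.

        CREATE TRIGGER trgReplyCount ON tblForumReplies
        AFTER INSERT, DELETE
        AS
        BEGIN
            UPDATE u SET TotalReplies = u.TotalReplies + i.Cnt
            FROM tblForumUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS Cnt
                  FROM inserted GROUP BY IdRepliedBy) i
              ON i.IdRepliedBy = u.UserId;

            UPDATE u SET TotalReplies = u.TotalReplies - d.Cnt
            FROM tblForumUsers u
            JOIN (SELECT IdRepliedBy, COUNT(*) AS Cnt
                  FROM deleted GROUP BY IdRepliedBy) d
              ON d.IdRepliedBy = u.UserId;
        END;

    The page query then reads TotalReplies with a plain join instead of the GROUP BY subquery, which is where the CPU was going.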

    Read the article

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns: ID, Store_ID, Feature_ID, Order_ID, Viewed_Date, Deal_ID, IsTrial. The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store ID and a particular feature. The query takes the form of:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In SQL Server you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID contain a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need a bigger improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. PS: I forgot to add the viewed_date filter to the query; I have done this now.
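
    MySQL has no filtered indexes, but a composite index that puts the equality columns first and the range column last can serve all four predicates (a sketch; the index name is made up):

        ALTER TABLE theTable
          ADD INDEX idx_store_feature_trial_date
              (store_id, feature_id, istrial, viewed_date);

        -- The count can then be resolved almost entirely within the index:
        SELECT COUNT(*)
        FROM theTable
        WHERE store_id = 2 AND feature_id = 12 AND istrial = 0
          AND viewed_date BETWEEN '2009-12-01' AND '2010-12-31';

    EXPLAIN should then show "Using index" (a covering index scan), and the rows examined should drop from millions to roughly the 500k that actually match.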

    Read the article

  • Should core application configuration be stored in the database, and if so what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to it in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded, and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now versus less work later, or am I missing some metric I should be using to determine whether or not this is appropriate? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes. The database user that this application uses will only be able to manipulate data through stored procedures. What else can I do?

    Read the article

  • Encrypting a file in win API

    - by Kristian
    hi, I have to write a Windows API program that encrypts a file by adding three to each character. So I wrote this, and now it's not doing anything... where did I go wrong?

        #include "stdafx.h"
        #include <windows.h>

        int _tmain(int argc, _TCHAR* argv[])
        {
            HANDLE filein, fileout;
            filein = CreateFile(L"d:\\test.txt", GENERIC_READ, 0, NULL,
                                OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
            fileout = CreateFile(L"d:\\test.txt", GENERIC_WRITE, 0, NULL,
                                 CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

            DWORD really;   // stores how many bytes were actually read
            do
            {
                BYTE x[1024];   // the read buffer
                ReadFile(filein, x, 1024, &really, NULL);
                for (DWORD i = 0; i < really; i++)
                {
                    x[i] = (x[i] + 3) % 256;
                }
                DWORD really2;
                WriteFile(fileout, x, really, &really2, NULL);
            } while (really == 1024);

            CloseHandle(filein);
            CloseHandle(fileout);
            return 0;
        }

    And if I'm right, how can I know it's OK?
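
    One thing worth noting (an observation, not from the post): both CreateFile calls open the SAME path, and CREATE_ALWAYS truncates d:\test.txt to zero bytes before the first ReadFile, so there is nothing left to encrypt. A sketch of the fix - write to a second file and treat a read of 0 bytes as EOF:

        filein  = CreateFile(L"d:\\test.txt", GENERIC_READ, 0, NULL,
                             OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        fileout = CreateFile(L"d:\\test.enc", GENERIC_WRITE, 0, NULL,  // new file
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

        BYTE x[1024];
        DWORD really = 0, really2 = 0;
        while (ReadFile(filein, x, sizeof(x), &really, NULL) && really > 0)
        {
            for (DWORD i = 0; i < really; i++)
                x[i] = (BYTE)((x[i] + 3) % 256);
            WriteFile(fileout, x, really, &really2, NULL);
        }

    To verify it worked, open d:\test.enc in an editor (every byte should be shifted by three), or write the matching decrypt loop (subtract 3) and check that the round trip reproduces the original file.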

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ There's a Firefox behavior (and that of some other browsers, I guess) I'd like to understand: if I take any webpage and try to insert the same image multiple times in JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use cache" (see the article). I know there are workarounds (like adding query strings to the ends of URLs, etc.), but why does Firefox act like that? If I say that an image does not have to be cached, why is the image still taken from cache when I try to re-insert it? Also, what cache is used for this? (I guess it's the memory cache.) Is this behavior the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and the same headers on a js script will make Firefox redownload it each time you append the script to the DOM. PS: I know you're wondering WHY I need to do that (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you. The good answer is: Firefox will store images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, and it is odd because it's not the same for JavaScript files, for example. Could someone explain, or link to a document describing, how the Firefox cache WORKS?

    Read the article

  • Twitter API similar to Google Alerts

    - by Felix Perdana
    I am trying to create a web application which has functionality similar to Google Alerts (by similar I mean the user can provide an email address for the alert to be sent to, daily or hourly). The only limitation is that it only gives alerts to the user based on a certain keyword or hashtag. I think that I have found the fundamental API needed for this web application: https://dev.twitter.com/docs/api/1/get/search The problem is I still don't know all the web technologies needed for this application to work properly. For example: Do I have to store all of the searched keywords in a database? Do I have to keep polling with AJAX requests all the time in order to keep my database updated? What if a keyword the user provided is so popular right now that it might get thousands of tweets in just an hour (not to mention, there might be several emails that request several trending topics)? By the way, I am trying to build this application using PHP. So please let me know what kind of techniques I need to learn for such a web app (and some references, maybe)? Any kind of help will be appreciated. Thanks in advance :) Regards, Felix Perdana
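
    A rough shape of the polling side (a sketch; the alerts table, queueEmail() and the column names are all made up): store each alert's keyword plus the last tweet ID already seen, then let a cron job ask the search API only for newer tweets - no AJAX involved, since this runs server-side.

        <?php
        // hypothetical schema: alerts(id, keyword, email, last_tweet_id)
        foreach ($db->query('SELECT * FROM alerts') as $alert) {
            $url = 'http://search.twitter.com/search.json'
                 . '?q=' . urlencode($alert['keyword'])
                 . '&since_id=' . (int)$alert['last_tweet_id'];
            $data = json_decode(file_get_contents($url), true);
            if (!empty($data['results'])) {
                queueEmail($alert['email'], $data['results']); // hypothetical
                $db->exec('UPDATE alerts SET last_tweet_id = '
                        . (int)$data['max_id']
                        . ' WHERE id = ' . (int)$alert['id']);
            }
        }

    Paging with since_id keeps even a trending hashtag down to one request per run, and the queued email can batch however many tweets arrived since the last run.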

    Read the article

  • Another boost error

    - by user1676605
    On this code I get an enormous error:

        static void ParseTheCommandLine(int argc, char *argv[])
        {
            int count;
            int seqNumber;

            namespace po = boost::program_options;
            std::string appName = boost::filesystem::basename(argv[0]);

            po::options_description desc("Generic options");
            desc.add_options()
                ("version,v", "print version string")
                ("help", "produce help message")
                ("sequence-number", po::value<int>(&seqNumber)->default_value(0),
                 "sequence number")
                ("pem-file", po::value< vector<string> >(), "pem file")
            ;

            po::positional_options_description p;
            p.add("pem-file", -1);

            po::variables_map vm;
            po::store(po::command_line_parser(argc, argv).
                      options(desc).positional(p).run(), vm);
            po::notify(vm);

            if (vm.count("pem file"))
            {
                cout << "Pem files are: "
                     << vm["pem-file"].as< vector<string> >() << "\n";
            }

            cout << "Sequence number is " << seqNumber << "\n";
            exit(1);
        }

    The error (its template brackets were garbled in the original post):

        ../../../FIXMarketDataCommandLineParameters/FIXMarketDataCommandLineParameters.hpp|98|
        error: no match for 'operator<<' in 'std::operator<<(std::cout, "Pem files are: ")
               << vm["pem-file"].as< std::vector<std::string> >()'
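
    Two things stand out (a reading of the error, not from the post): std::ostream has no operator<< for std::vector<std::string>, so the vector must be printed element by element, and vm.count("pem file") checks a name with a space, which can never match the "pem-file" option. A sketch of the corrected block:

        if (vm.count("pem-file"))   // "pem-file", not "pem file"
        {
            const std::vector<std::string>& pems =
                vm["pem-file"].as< std::vector<std::string> >();
            std::cout << "Pem files are:";
            for (std::size_t i = 0; i < pems.size(); ++i)
                std::cout << ' ' << pems[i];
            std::cout << '\n';
        }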

    Read the article

  • Is there a standard SQL table design for overriding 'big picture' default values with lower level detail?

    - by RichardHowells
    Here's an example. Suppose we are trying to calculate a service charge. Say sales in the USA attract a 10 dollar charge and sales in the UK attract a 20 dollar charge. So far it's easy - we are starting to imagine a table that lists charges by country. Now let's assume that Alaska and Hawaii are treated as special cases: they are both 15 dollars. That suggests a table with states. Alaska and Hawaii are charged at 15, but presumably we need 48 (redundant) rows all saying 10. This gives us a maintenance problem: our user only wants to type 10 once, NOT 48 times. It does not sit well with the UK either, as the UK does not have states. Suppose we throw in another couple of cross-cutting rules: if you order by phone there is a 10% supplement on the charge; if you order via the web there is a 10% discount. But for some reason best known to the owners of the business, the web/phone supplement/discount is not applied in Hawaii. It seems to me that this is quite a common kind of problem, and there is probably a well-known arrangement of tables to store the data. Most cases get handled by broad-brush answers, but there are some very detailed low-level variations that give rise to a huge number of theoretical combinations, most of which are not used.
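
    One common shape for this (a sketch; all names are invented): store only the exceptions, let NULL mean "any", and resolve a charge by taking the most specific matching row. The Hawaii oddity then becomes just another, more specific row rather than a special case in code.

        CREATE TABLE service_charge (
            country VARCHAR(2)   NOT NULL,
            state   VARCHAR(2)   NULL,      -- NULL = whole country
            channel VARCHAR(5)   NULL,      -- NULL = any channel ('web'/'phone')
            charge  DECIMAL(9,2) NOT NULL
        );

        -- ('US', NULL, NULL, 10.00)   USA default, typed once
        -- ('US', 'AK', NULL, 15.00)   Alaska override
        -- ('US', 'HI', NULL, 15.00)   Hawaii override
        -- ('UK', NULL, NULL, 20.00)   the UK simply never uses state

        SELECT charge
        FROM service_charge
        WHERE country = @country
          AND (state   = @state   OR state   IS NULL)
          AND (channel = @channel OR channel IS NULL)
        ORDER BY (state   IS NOT NULL) DESC,    -- most specific row wins
                 (channel IS NOT NULL) DESC
        LIMIT 1;  -- MySQL syntax; SELECT TOP 1 in SQL Server

    The phone/web adjustment can live in the same table as a multiplier column, with a more specific Hawaii row pinning its multiplier to 1.00 - "do not apply it here" is again just another row, and the 48 unexceptional states need no rows at all.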

    Read the article

  • Picture.writeToStream() not writing out all bitmaps

    - by quickdraw mcgraw
    I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a webpage. I can successfully render this Picture object to a bitmap using canvas.drawPicture(picture, dst) with no problems. However, when I use picture.writeToStream(fos) to serialize the picture object out to file, and then Picture.createFromStream(fis) to read the data back in and create a new Picture object, the resulting bitmap, when rendered as above, is missing any larger images (anything over around 20KB, by observation). This occurs on all the Android OS platforms that I have tested: 1.5, 1.6 and 2.1. Looking at the native code for Skia, which is the underlying Android graphics library, and at the output file produced by picture.writeToStream(), I can see how the file format is constructed. I can see that some of the images in this Skia spool file are not being written out (the larger ones). The code that appears to be the problem is in SkBitmap.cpp, in the method void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const; It writes out the bitmap fWidth, fHeight, fRowBytes, fConfig and isOpaque values, but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means that the spool file does not contain any pixel information about the actual image and therefore cannot restore the picture object correctly. Effectively this renders the writeToStream and createFromStream() APIs useless, as they do not reliably store and recreate the picture data. Has anybody else seen this behaviour? If so, am I using the API incorrectly, can it be worked around, is there an explanation (i.e. incomplete API / bug), and if so are there any plans for a fix in a future release of Android?
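
    A workaround sketch (not from the post): rather than trusting Picture's own serialization, rasterize the Picture into a Bitmap and persist that with Bitmap.compress(), which does write pixel data reliably.

        // Java (Android)
        Picture picture = webView.capturePicture();
        Bitmap bitmap = Bitmap.createBitmap(picture.getWidth(),
                picture.getHeight(), Bitmap.Config.ARGB_8888);
        new Canvas(bitmap).drawPicture(picture);   // rasterize once

        FileOutputStream fos = new FileOutputStream(file);
        try {
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos); // lossless
        } finally {
            fos.close();
        }

    The trade-off is that a bitmap loses the resolution independence of the Picture's display list, and a full-page ARGB_8888 bitmap can be large, so this suits snapshot or cache use rather than re-rendering at arbitrary scales.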

    Read the article

  • Dealing with a large number of text strings

    - by Fadrian
    When my project runs, it collects a large number of string text blocks (about 20K of them; the largest count I have seen is about 200K) in a short span of time and stores them in a relational database. Each string text is relatively small, averaging about 15 short lines (around 300 characters). The current implementation is in C# (VS2008) and .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these: Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage? Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET framework. Do you know any best practices to deal with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it). EDITED: I will keep adding to this to clarify the points raised. I don't need text indexing or searching on this text; I just need to be able to retrieve it at a later stage for display as a text block, using its primary key. I have a working solution implemented as above, and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence every optimization I can do will help.
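
    For reference, the GZip route in .NET 3.5 is only a few lines (a sketch; the byte[] would go into a varbinary(MAX) column). Worth noting: at roughly 300 characters per block, the GZip header overhead eats into the savings, so measuring on real data matters.

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        static byte[] CompressText(string text)
        {
            byte[] raw = Encoding.UTF8.GetBytes(text);
            using (MemoryStream output = new MemoryStream())
            {
                using (GZipStream gzip =
                       new GZipStream(output, CompressionMode.Compress))
                {
                    gzip.Write(raw, 0, raw.Length);
                }                       // dispose flushes the gzip footer
                return output.ToArray();
            }
        }

    Compressing in the application also moves the CPU cost off the database server, which is usually the scarcer resource; the flip side is that the text becomes opaque to SQL Server, which the "no searching needed" requirement makes acceptable.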

    Read the article

  • JSP includes and MVC pattern

    - by xingyu
    I am new to JSP/Servlets/MVC and am writing a JSP page (using servlets and the MVC pattern) that displays information about recipes, and I want users to be able to comment on them too. So for the servlet, on doGet(), it grabs all the required info into a model POJO and forwards the request on to a JSP view for rendering. That is working just fine. I'd like the "comment" part to be a separate JSP, so in RecipeView.jsp I can use an include to separate these views out. So I've made that, but am now a little stuck. The form in CommentOnRecipe.jsp posts to a CommentAction servlet that handles the recording of the comment just fine. So when I reload the recipe page, I can see the comment I just made. I'd like to (see the sketch below):

    1. Reload the page automatically after commenting (no AJAX for now).
    2. Block the user from making more than one comment on each recipe over a 1-day timeframe (via a cookie). So I store a cookie indicating the product ID whenever the user makes a comment, so we can check this later? How would that work in an MVC context?
    3. Show a message to the user that they have already commented on a recipe when they visit one which they have commented on.

    I'm confused about using beans/including JSPs etc. to achieve this. I know that in ASP.NET land it would be a UserControl that I would place on a page, or in ASP.NET MVC a PartialView of some sort. I'm just confused with the way this works in a JSP/Servlets/MVC context.
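
    A sketch of how the pieces can fit (servlet code; the cookie key and variable names are made up): after saving the comment, set a one-day cookie and redirect back to the recipe - POST-redirect-GET covers point 1 - then let the GET servlet expose a flag the JSPs can read for points 2 and 3.

        // in CommentAction.doPost(), after storing the comment
        Cookie marker = new Cookie("commented_" + recipeId, "1");
        marker.setMaxAge(60 * 60 * 24);            // one day
        response.addCookie(marker);
        response.sendRedirect("Recipe?id=" + recipeId);

        // in the Recipe servlet's doGet(), while building the model
        boolean alreadyCommented = false;
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if (c.getName().equals("commented_" + recipeId)) {
                    alreadyCommented = true;
                }
            }
        }
        request.setAttribute("alreadyCommented", alreadyCommented);

    RecipeView.jsp then pulls the comment view in with <jsp:include page="CommentOnRecipe.jsp" />, and that JSP either renders the form or the "you already commented" message based on the alreadyCommented attribute - the closest JSP analogue to an ASP.NET UserControl.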

    Read the article

  • Pass the return type as a parameter in java?

    - by jonderry
    I have some files that contain logs of objects. Each file can store objects of a different type, but a single file is homogeneous - it only stores objects of a single type. I would like to write a method that returns an array of these objects, and have the array be of a specified type (the type of objects in a file is known and can be passed as a parameter). Roughly, what I want is something like the following:

        public static <T> T[] parseLog(File log, Class<T> cls) throws Exception {
            ArrayList<T> objList = new ArrayList<T>();
            FileInputStream fis = new FileInputStream(log);
            ObjectInputStream in = new ObjectInputStream(fis);
            try {
                Object obj;
                while (!((obj = in.readObject()) instanceof EOFObject)) {
                    T tobj = (T) obj;
                    objList.add(tobj);
                }
            } finally {
                in.close();
            }
            return objList.toArray(new T[0]);
        }

    The above code doesn't compile (there's an error on the return statement, and a warning on the cast), but it should give you the idea of what I'm trying to do. Any suggestions for the best way to do this?
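
    The usual fix (a sketch) is to let the Class<T> parameter that is already being passed in create the array reflectively, since new T[0] cannot compile:

        // replaces "return objList.toArray(new T[0]);"
        @SuppressWarnings("unchecked")
        T[] result = (T[]) java.lang.reflect.Array.newInstance(cls, objList.size());
        return objList.toArray(result);

    As a bonus, cls.cast(obj) can replace the unchecked (T) obj cast, so a file containing the wrong type fails immediately with a ClassCastException instead of blowing up later.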

    Read the article

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich-client CRUD application framework for the past few years, mostly as a hobby, but I'm also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC-based connection or a lightweight RMI server. Last night I started a load-testing application which ran 100 headless clients bombarding the server with requests, each client waiting only 1-2 seconds between running simple use cases consisting of selecting records, along with associated detail records, from a simple e-store database (Chinook). This morning, when I looked at the telemetry results from the server profiling session, I noticed something that seemed strange to me (and made me keep the setup running for the remainder of the day). I don't really know what conclusions to draw from it. (The post links four telemetry screenshots: memory, GC activity, threads and CPU load.) Interesting, right? So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this? Running the server against MySQL, as opposed to an embedded H2 database, does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • MySQL: inserting records from table A into tables B and C (linked by foreign key) depending on column values in table A

    - by Chez
    Hi all, I have been searching high and low for a simple solution to a MySQL insert problem. The problem is as follows: I am putting together an organisational database consisting of departments and desks. A department may have n desks, or none. Both departments and desks have their own table, linked by a foreign key in each desk record pointing to the relevant record (i.e. the PK) in departments. I have a temporary table which I use to hold all new department data (n records long). In this table, the n desk records for a department directly follow the department record. In the TEMP table, if the column department_name has a value, the record is a department; if it doesn't, it will have a value in the column desk and is therefore a desk belonging to the department above it. As I said, there may be several desk records until you get to the next department record. OK, so what I want to do is the following: insert the departments into the departments table and their desks into the desks table, generating in each desk record a foreign key referencing the id of the relevant newly created department record. In pseudo-ish code:

        for each record in TEMP table
            if Department
                INSERT the record into Departments
                get the id of the newly created Department record and store it somewhere
            if Desk
                INSERT the desk into the Desks table with the relevant department's id as the foreign key

    Note once again that all of a department's desks directly follow the department in the TEMP table. Many thanks
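
    In MySQL terms, the "store it somewhere" step is LAST_INSERT_ID() (a sketch; table and column names follow the question's description):

        -- department row encountered in the TEMP table
        INSERT INTO departments (department_name) VALUES ('Accounts');
        SET @dept_id = LAST_INSERT_ID();   -- PK of the department just created

        -- each desk row that follows, until the next department row
        INSERT INTO desks (desk, department_id) VALUES ('Desk 1', @dept_id);
        INSERT INTO desks (desk, department_id) VALUES ('Desk 2', @dept_id);

    The same walk works in one pass inside a stored procedure with a cursor over the TEMP table, resetting @dept_id every time a row with a department_name comes up.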

    Read the article

  • Transferring HashMap between client and server using Sockets (JAVA)

    - by sar
    I am working on a Java project in which there are multiple terminals. These terminals act as clients and servers. For example, if there are 3 terminals A, B and C, then at any given point in time one of them, say A, will be a client making a broadcast request. The other two terminals, B and C, will reply. I am using sockets to make them communicate. Once the reply is received from all the other terminals, A will check the pool of channels to see if any one of the channels is free. It takes up a free channel and sets its availability to false. The channel pool is implemented using a HashMap:

        HashMap channelpool = new HashMap();

    so that channelpool looks like {1=true, 2=true, 3=false, 4=true, 5=true, 6=true, 7=true, 8=true, 9=true, 10=true}. So initially all the channels are true and any terminal can take any channel. Once a channel is taken, it is set to false for the period of use and then reset to true. Now this HashMap has to be shared among the distributed terminals and kept up to date. I cannot use a shared resource among the terminals to store the HashMap. Can someone tell me an easy way to transfer the HashMap between the terminals? I would appreciate it if someone could point me to a website which discusses this.
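
    The transfer itself is small (a sketch; keeping the copies consistent across terminals is a separate problem): HashMap is Serializable, so it can travel through the socket's object streams directly.

        // sender side
        ObjectOutputStream out =
            new ObjectOutputStream(socket.getOutputStream());
        out.writeObject(channelpool);   // e.g. HashMap<Integer, Boolean>
        out.flush();

        // receiver side
        ObjectInputStream in =
            new ObjectInputStream(socket.getInputStream());
        @SuppressWarnings("unchecked")
        HashMap<Integer, Boolean> pool =
            (HashMap<Integer, Boolean>) in.readObject();

    Since every terminal mutates the pool, each update still needs either a single owner (the current "server" merges changes and rebroadcasts the map) or some locking/versioning scheme; shipping the map alone does not make it consistent.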

    Read the article

  • Serialization for memcached

    - by Ram
    I have this huge domain object (say, parent) which contains other domain objects. It takes a lot of time to "create" this parent object by querying a DB (OK, we are optimizing the DB). So we decided to cache it using memcached (with NorthScale, to be specific). So I have gone through my code and marked all the classes (I think) as [Serializable], but when I add the object to the cache, I see a SerializationException getting thrown in my VS.NET output window:

        var cache = new NorthScaleClient("MyBucket");
        cache.Store(StoreMode.Set, key, value);

    This is the exception:

        A first chance exception of type 'System.Runtime.Serialization.SerializationException' occurred in mscorlib.dll

    So my guess is that I have not marked all the classes as [Serializable]. I am not using any third-party libraries and can mark any class as [Serializable], but how do I find out which class is failing when the cache is trying to serialize the object? Edit 1: casperOne's comments made me think. I was able to cache these domain objects with the Microsoft Cache Application Block without marking them [Serializable], but not with NorthScale memcached. It makes me think there might be something to do with their implementation, but just out of curiosity, I am still interested in finding where it fails when trying to add the object to memcached.
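
    One way to find the failing class (a debugging sketch): run the same object graph through BinaryFormatter yourself; its SerializationException names the first type in the graph that is not marked [Serializable].

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static void ProbeSerialization(object graph)
        {
            BinaryFormatter formatter = new BinaryFormatter();
            using (MemoryStream ms = new MemoryStream())
            {
                // Throws SerializationException such as:
                // "Type 'X' ... is not marked as serializable."
                formatter.Serialize(ms, graph);
            }
        }

    Calling ProbeSerialization(value) from a unit test (or at a breakpoint) and fixing the reported types one by one flushes out every offender. The Cache Application Block keeps objects in-process as references, which is presumably why it never needed them to be serializable.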

    Read the article

  • Is Storing Cookies in a Database Safe?

    - by viatropos
    If I use mechanize, I can, for instance, create a new Google Analytics profile for a website. I do this by programmatically filling out the login form and storing the cookies in the database. Then, at least until the cookie expires, I can access my Analytics admin panel without having to enter my username and password again. Assuming you can't create a new Analytics profile any other way (I don't think OpenAuth or any of that works for actually creating a new Google Analytics profile - the Analytics API is for viewing the data, but I need to create a new profile), is storing the cookie in the database a bad thing? If I do store the cookie in the database, it makes it super easy to programmatically log in to Google Analytics without the user ever having to go to the browser (maybe the app has functionality that says: "user, you can schedule a hook that creates a new analytics profile for each new domain you create; just enter your credentials once and we'll keep you logged in and safe"). Otherwise I have to keep passing around emails and passwords, which seems worse. So is storing cookies in the database safe?

    Read the article
