Search Results

Search found 5416 results on 217 pages for 'storage'.

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code: BigDecimal myBD = new BigDecimal("" + someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in DB... In more than 99% of the cases, everything works fine. Except that in a very few cases, the bug mentioned above occurs. And that's quite annoying. If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases: BigDecimal myBD = new BigDecimal(someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in DB... However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue: Do not use the new BigDecimal(String) constructor and use the new BigDecimal(double) constructor directly? Force Oracle to use toPlainString() instead of the toString() method when dealing with BigDecimal (and in this case how to do that)? Any other solution? Environment information: Java 1.6.0_14 Hibernate 2.1.8 (yes, it is quite an old version) Oracle JDBC 9.0.2.0 and also tested with 10.2.0.3.0 Oracle database 10.2.0.3.0
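
    Since the defect only shows up when the BigDecimal's textual form switches to scientific notation, a small, hedged illustration of the standard JDK behaviour (not the asker's fix, and not a claim about what the Oracle driver does internally) may help when weighing the constructors; toPlainString() is a standard JDK method:

      import java.math.BigDecimal;

      public class BigDecimalForms {
          public static void main(String[] args) {
              double someDoubleValue = 7.65E+7;   // 76,500,000.0, exactly representable as a double

              BigDecimal fromString = new BigDecimal("" + someDoubleValue); // parses the string "7.65E7"
              BigDecimal fromDouble = new BigDecimal(someDoubleValue);      // exact value of the double, scale 0

              System.out.println(fromString.toString());      // prints "7.65E+7" (scientific notation since Java 5)
              System.out.println(fromString.toPlainString()); // prints "76500000"
              System.out.println(fromDouble.toString());      // prints "76500000" (no exponent for this value)
              // Note: BigDecimal.valueOf(someDoubleValue) is defined as new BigDecimal(Double.toString(d)),
              // so its toString() is also "7.65E+7" here; it does not by itself avoid the notation change.
          }
      }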

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.
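
    On the serialize-as-UInt64s point, a hedged C# sketch of the round trip through the byte[] API described above follows; it relies only on the SequenceNumber(byte[]) constructor and GetBytes() already mentioned (the System.IO.Log assembly must be referenced), and handles both the 8- and 16-byte forms:

      using System;
      using System.IO.Log;

      static class SequenceNumberInterop
      {
          // Read a SequenceNumber as one or two UInt64s, depending on how many bytes it exposes.
          public static ulong[] ToUInt64s(SequenceNumber sn)
          {
              byte[] bytes = sn.GetBytes();                  // 8 or 16 bytes in practice
              ulong[] parts = new ulong[bytes.Length / 8];
              for (int i = 0; i < parts.Length; i++)
                  parts[i] = BitConverter.ToUInt64(bytes, i * 8);
              return parts;
          }

          // Rebuild a SequenceNumber from previously stored parts.
          public static SequenceNumber FromUInt64s(ulong[] parts)
          {
              byte[] bytes = new byte[parts.Length * 8];
              for (int i = 0; i < parts.Length; i++)
                  Buffer.BlockCopy(BitConverter.GetBytes(parts[i]), 0, bytes, i * 8, 8);
              return new SequenceNumber(bytes);
          }
      }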

    Read the article

  • XML: what processing rules apply for values intertwined with tags?

    - by iCE-9
    I've started working on a simple XML pull-parser, and as I've just defuzzed my mind on what's correct syntax in XML with regard to certain characters/sequences, ignorable whitespace and such (thank you, http://www.w3schools.com/xml/xml_elements.asp), I realized that I still don't know how the following case should be handled (which Validome considers perfectly well-formed; note that I only want to use XML files for data storage, no entities, DTDs or Schemas needed): <bookstore> <book id="1"> <author>Kurt Vonnegut Jr.</author> <title>Slapstick</title> </book> We drop a pie here. <book id="2">Who cares anyway? <author>Stephen King</author> <title>The Green Mile</title> </book> And another one here. <book id="3"> <author>Next one</author> <title>This time with its own title</title> </book> </bookstore> "We drop a pie here." and "And another one here." are values of the 'bookstore' element. "Who cares anyway?" is a value related to the second 'book' element. How are these processed, if at all? Will "We drop a pie here." and "And another one here." be concatenated to form one value for the 'bookstore' element, or are they treated separately, stored somewhere, affecting the outcome of the parsing of the element they belong to, or...?
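
    For comparison, a minimal sketch with a standard StAX pull parser (javax.xml.stream) shows how such mixed content is reported: each run of text between tags arrives as its own CHARACTERS event inside the enclosing element, and nothing concatenates the runs unless the application does (a parser is even allowed to split one run across several events):

      import java.io.StringReader;
      import javax.xml.stream.XMLInputFactory;
      import javax.xml.stream.XMLStreamConstants;
      import javax.xml.stream.XMLStreamReader;

      public class MixedContentDemo {
          public static void main(String[] args) throws Exception {
              String xml = "<bookstore><book id=\"1\"><title>Slapstick</title></book>"
                         + "We drop a pie here.<book id=\"2\"/>And another one here.</bookstore>";
              XMLStreamReader r = XMLInputFactory.newInstance()
                      .createXMLStreamReader(new StringReader(xml));
              while (r.hasNext()) {
                  int event = r.next();
                  // Each text node is reported separately; the two sentences never merge.
                  if (event == XMLStreamConstants.CHARACTERS && !r.isWhiteSpace())
                      System.out.println("text node: [" + r.getText() + "]");
              }
          }
      }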

    Read the article

  • Creating spotlight in OpenGL scene

    - by Victor Oliveira
    I'm studying OpenGL and trying to create a spotlight in my application. The code I'm using for my #vertex-shader is below: #:vertex-shader #{ #version 150 core in vec3 in_pos; in vec2 in_tc; out vec2 tc; glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f); GLfloat spot_direction[] = { -1.0, -1.0, 0.0 }; glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction); glEnable(GL_LIGHT0); void main() { vec4 pos= vec4(vec3(1.0)*in_pos - vec3(1.0), 1.0); pos.z=0.0; gl_Position = pos; tc = in_tc; } } The thing is, every time I try to run the code I get an error that says: Type: other, Source: api, ID: 131169, Severity: low Message: Framebuffer detailed info: The driver allocated storage for renderbuffer 1. len = 157, written = 0 failed to compile vertex shader of deferred: directional info log for shader deferred: directional vertex info log for shader deferred: directional: ERROR: Unbound variable: when Specifications: Renderer: GeForce GTX 580/PCIe/SSE2 Version: 3.3.0 NVIDIA 319.17 GLSL: 3.30 NVIDIA via Cg compiler Status: Using GLEW 1.9.0 1024 x 768 OS: Linux debian I guess creating this spotlight should be pretty simple, but since I'm really new to OpenGL I don't have a clue how to do it yet, even after reading sources like: http://www.glprogramming.com/red/chapter05.html#name3 I've also read in some places that spotlights can get really hard to understand, but I can't avoid this step right now since I'm following my lecture schedule. Could anybody help me?
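
    The glLightf/glLightfv/glEnable lines are calls into the old fixed-function C API; they cannot appear inside a #version 150 shader, which is presumably why the compile fails. In a core-profile pipeline the spotlight has to be computed manually from uniforms, usually in the fragment shader. A hedged sketch follows; every uniform and varying name here is made up and has to be wired up by the application:

      #version 150 core
      uniform vec3 u_lightPos;        // spotlight position, same space as v_fragPos
      uniform vec3 u_spotDirection;   // e.g. normalize(vec3(-1.0, -1.0, 0.0))
      uniform float u_cutoffCos;      // cos(radians(20.0)) for a 20-degree cone
      in vec3 v_fragPos;
      in vec3 v_normal;
      out vec4 fragColor;

      void main() {
          vec3 toFrag = normalize(v_fragPos - u_lightPos);
          float spotFactor = dot(toFrag, normalize(u_spotDirection));
          float diffuse = max(dot(normalize(v_normal), -toFrag), 0.0);
          // Fragments outside the cone (angle larger than the cutoff) receive no light.
          float lit = spotFactor > u_cutoffCos ? diffuse : 0.0;
          fragColor = vec4(vec3(lit), 1.0);
      }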

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • How to put large text data (~20 MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data: internal static string InsertStorageItem = "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)"; and the code I am using to execute this query is as follows: string content = "very very large data"; string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM"); var command = new SqlCeCommand(query, _sqlConnection); var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText); paramData.Value = content; paramData.SourceColumn = "StorageData"; command.ExecuteNonQuery(); But at the last line I am getting the following error: System.Data.SqlServerCe.SqlCeException was unhandled by user code Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ] Source=SQL Server Compact ADO.NET Data Provider HResult=-2147467259 NativeError=25920 StackTrace: at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options) at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery() at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders() at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) InnerException: Now my question is how am I supposed to insert such large data into a SQL CE database? Regards, Anindya Chatterjee http://abstractclass.org
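
    A commonly suggested angle on this error is to pass every value as a parameter and give the ntext parameter an explicit Size, since without one the value can reportedly be converted through nvarchar and cut at 4000 characters; whether that alone resolves it may depend on the provider version. A hedged reworking of the insert (content and _sqlConnection as in the question; using System.Data and System.Data.SqlServerCe assumed):

      const string InsertStorageItem =
          "insert into Storage(FolderName, MessageId, MessageDate, StorageData) " +
          "values (@FolderName, @MessageId, @MessageDate, @StorageData)";

      using (var command = new SqlCeCommand(InsertStorageItem, _sqlConnection))
      {
          command.Parameters.Add("@FolderName", SqlDbType.NVarChar).Value = "Inbox";
          command.Parameters.Add("@MessageId", SqlDbType.NVarChar).Value = "AXOGTRR1445/DSDS587444WEE";
          command.Parameters.Add("@MessageDate", SqlDbType.DateTime).Value = new DateTime(2010, 4, 19, 11, 11, 3);
          var paramData = command.Parameters.Add("@StorageData", SqlDbType.NText);
          paramData.Size = content.Length;   // explicit size so the value is not cut at the nvarchar limit
          paramData.Value = content;
          command.ExecuteNonQuery();
      }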

    Read the article

  • What is the fastest collection in C# to implement a priority queue?

    - by Nathan Smith
    I need to implement a queue for messages on a game server, so it needs to be as fast as possible. The queue will have a maximum size. I need to prioritize messages once the queue is full by working backwards and removing a lower priority message (if one exists) before adding the new message. The application is asynchronous so access to the queue needs to be locked. I'm currently implementing it using a LinkedList as the underlying storage but have concerns that searching and removing nodes will keep it locked for too long. Here's the basic code I have at the moment: public class ActionQueue { private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>(); private int _maxSize; /// <summary> /// Initializes a new instance of the ActionQueue class. /// </summary> public ActionQueue(int maxSize) { _maxSize = maxSize; } public int Count { get { return _actions.Count; } } public void Enqueue(ClientAction action) { lock (_actions) { if (Count < _maxSize) _actions.AddLast(action); else { LinkedListNode<ClientAction> node = _actions.Last; while (node != null) { if (node.Value.Priority < action.Priority) { _actions.Remove(node); _actions.AddLast(action); break; } node = node.Previous; } } } } public ClientAction Dequeue() { ClientAction action = null; lock (_actions) { action = _actions.First.Value; _actions.RemoveFirst(); } return action; } }
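
    As a point of comparison, a hedged sketch of an alternative structure: a SortedDictionary of FIFO buckets keyed by priority, so enqueueing on a full queue evicts from the lowest-priority bucket without scanning the whole list under the lock. ClientAction and its Priority property are taken from the question; a higher number is assumed to mean higher priority:

      using System.Collections.Generic;
      using System.Linq;

      public class ActionPriorityQueue
      {
          private sealed class Descending : IComparer<int>
          {
              public int Compare(int x, int y) { return y.CompareTo(x); }
          }

          private readonly SortedDictionary<int, Queue<ClientAction>> _buckets =
              new SortedDictionary<int, Queue<ClientAction>>(new Descending());
          private readonly int _maxSize;
          private int _count;

          public ActionPriorityQueue(int maxSize) { _maxSize = maxSize; }

          public int Count { get { lock (_buckets) { return _count; } } }

          public void Enqueue(ClientAction action)
          {
              lock (_buckets)
              {
                  if (_count >= _maxSize)
                  {
                      int lowest = _buckets.Keys.Last();       // last key = lowest priority (descending comparer)
                      if (lowest >= action.Priority) return;   // nothing lower-priority to evict; drop the new message
                      Queue<ClientAction> victims = _buckets[lowest];
                      victims.Dequeue();
                      if (victims.Count == 0) _buckets.Remove(lowest);
                      _count--;
                  }
                  Queue<ClientAction> bucket;
                  if (!_buckets.TryGetValue(action.Priority, out bucket))
                      _buckets[action.Priority] = bucket = new Queue<ClientAction>();
                  bucket.Enqueue(action);
                  _count++;
              }
          }

          public ClientAction Dequeue()
          {
              lock (_buckets)
              {
                  if (_count == 0) return null;
                  KeyValuePair<int, Queue<ClientAction>> top = _buckets.First(); // highest-priority bucket
                  ClientAction action = top.Value.Dequeue();
                  if (top.Value.Count == 0) _buckets.Remove(top.Key);
                  _count--;
                  return action;
              }
          }
      }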

    Read the article

  • Is there a Boost (or other common lib) type for matrices with string keys?

    - by mohawkjohn
    I have a dense matrix where the indices correspond to genes. While gene identifiers are often integers, they are not contiguous integers. They could be strings instead, too. I suppose I could use a boost sparse matrix of some sort with integer keys, and it wouldn't matter if they're contiguous. Or would this still occupy a great deal of space, particularly if some genes have identifiers that are nine digits? Further, I am concerned that sparse storage is not appropriate, since this is an all-by-all matrix (there will be a distance in each and every cell, provided the gene exists). I'm unlikely to need to perform any matrix operations (e.g., matrix multiplication). I will need to pull vectors out of the matrix (slices). It seems like the best type of matrix would be keyed by a Boost unordered_map (a hash map), or perhaps even simply an STL map. Am I looking at this the wrong way? Do I really need to roll my own? I thought I saw such a class somewhere before. Thanks!
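
    A hedged sketch of the roll-your-own option being weighed here: dense storage in one flat vector (reasonable for an all-by-all matrix) plus a hash map translating gene identifier strings to indices. std::unordered_map (C++11) is assumed; boost::unordered_map is the near-identical older spelling:

      #include <stdexcept>
      #include <string>
      #include <unordered_map>
      #include <vector>

      // Dense all-by-all distance matrix keyed by gene identifier strings.
      class GeneDistanceMatrix {
      public:
          explicit GeneDistanceMatrix(const std::vector<std::string>& genes)
              : n_(genes.size()), data_(n_ * n_, 0.0) {
              for (std::size_t i = 0; i < genes.size(); ++i) index_[genes[i]] = i;
          }
          double& at(const std::string& a, const std::string& b) {
              return data_[idx(a) * n_ + idx(b)];
          }
          // A row "slice": distances from one gene to every other gene.
          std::vector<double> row(const std::string& a) const {
              std::size_t i = idx(a);
              return std::vector<double>(data_.begin() + i * n_, data_.begin() + (i + 1) * n_);
          }
      private:
          std::size_t idx(const std::string& g) const {
              std::unordered_map<std::string, std::size_t>::const_iterator it = index_.find(g);
              if (it == index_.end()) throw std::out_of_range("unknown gene: " + g);
              return it->second;
          }
          std::size_t n_;
          std::vector<double> data_;
          std::unordered_map<std::string, std::size_t> index_;
      };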

    Read the article

  • error C3662: override specifier 'new' only allowed on member functions of managed classes

    - by William
    Okay, so I'm trying to override a function in a parent class, and getting some errors. Here's a test case: #include <iostream> using namespace std; class A{ public: int aba; void printAba(); }; class B: public A{ public: void printAba() new; }; void A::printAba(){ cout << "aba1" << endl; } void B::printAba() new{ cout << "aba2" << endl; } int main(){ A a = B(); a.printAba(); return 0; } And here are the errors I'm getting: Error 1 error C3662: 'B::printAba' : override specifier 'new' only allowed on member functions of managed classes c:\users\test\test\test.cpp 12 test Error 2 error C2723: 'B::printAba' : 'new' storage-class specifier illegal on function definition c:\users\test\test\test.cpp 19 test How the heck do I do this?
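
    For reference, the 'new' in those messages is the C++/CLI override specifier, which native classes can't use. In standard C++ the base function is declared virtual and the derived one may add override (C++11). A hedged reworking of the test case; note also that A a = B(); slices the object, so a reference or pointer is needed for the derived version to be called:

      #include <iostream>
      using namespace std;

      class A {
      public:
          int aba;
          virtual void printAba() { cout << "aba1" << endl; }
          virtual ~A() {}
      };

      class B : public A {
      public:
          void printAba() override { cout << "aba2" << endl; }  // 'override' needs C++11; it can be omitted
      };

      int main() {
          B b;
          A& a = b;        // A a = B(); would slice and always print "aba1"
          a.printAba();    // prints "aba2" through the virtual call
          return 0;
      }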

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    ...Continued from my previous question. I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture in 512*512 the output is cropped incorrectly but other than that looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } Which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
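
    The last routine copies 320 bytes per row, but RGB565 is two bytes per pixel and each row of a 512-wide texture is 512*2 bytes, so both strides need to be expressed in bytes. A hedged C sketch of that copy (320-pixel rows and 480 rows follow the loop above):

      #include <stdint.h>
      #include <string.h>

      enum { SRC_W = 320, SRC_H = 480, TEX_W = 512, BYTES_PER_PIXEL = 2 };

      /* Copy a 320x480 RGB565 frame into the top-left corner of a 512x512 staging buffer. */
      static void copy_frame_to_texture(uint8_t *lBuf /* 512*512*2 bytes */, const uint8_t *frameBuf)
      {
          const size_t srcStride = SRC_W * BYTES_PER_PIXEL;   /* 640 bytes per source row   */
          const size_t dstStride = TEX_W * BYTES_PER_PIXEL;   /* 1024 bytes per texture row */
          for (int row = 0; row < SRC_H; ++row) {
              memcpy(lBuf + row * dstStride, frameBuf + row * srcStride, srcStride);
          }
      }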

    Read the article

  • Should I persist images on EBS or S3?

    - by javanes
    I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing MySQL data. In my web application people may upload images, so I need to persist them. There are 2 alternatives in my mind: Save uploaded images to the EBS volume. Use the S3 service. The following are my notes; please be skeptical about them, as my expertise is in software development, not servers. EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB). S3 plus: Serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3 my server will not be responsible for serving statics. S3 plus: Serving statics from EBS may incur I/O cost, though it will probably be minor. EBS plus: People say EBS is faster. S3 plus: People say S3 is safer for persistence. EBS plus: No need to learn an API; it is straightforward to save the images to the EBS volume. In short, I cannot decide; I will be happy if you can guide me. Thanks

    Read the article

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git. I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and haven't got them working), but all seems ok there. I now want to be able to pull or push from the laptop to my main desktop. The reason being the laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push / pull from the laptop to the main computer when I get home. I thought the simplest way of doing this would be to just have the code folder shared out across the LAN and do: git clone file://192.168.10.51/code Unfortunately this doesn't seem to be working for me: I open a Git Bash prompt in C:\code (the shared folder on both machines), type the above command, and this is what I get back: Initialized empty Git repository in C:/code/code/.git/ fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository fatal: The remote end hung up unexpectedly How can I share the repository between the two machines in the simplest way possible? There will be other locations that will be official storage points and places where the other devs and CI server etc will pull from; this is just so that I can work on the same repo across two machines. Thanks
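
    For what it's worth, the output suggests Git treated the file:// URL as a path under its own install directory. A hedged sketch of the commands that usually get suggested for this setup; the UNC-style path syntax and the second machine's address are assumptions:

      # From Git Bash on one machine, clone straight off the network share (UNC path, no file://)
      git clone //192.168.10.51/code mycode

      # Or add the other machine's share as a remote of an existing repo and pull from it;
      # pushing into a non-bare repository with a checked-out branch is best avoided.
      git remote add laptop //192.168.10.50/code
      git pull laptop master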

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well on huge documents, such as a .pst file (the Outlook mail storage): it can build an index that includes all the information in the .pst. The only problem is that the document is sometimes too large and contains a great many words. So when I search with Lucene, it only seems to process the front part of the indexed document; if a word occurs in the back part, that word is not found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know how to split the indexed content, and what size limit applies to searching? Cheers, and I'm waiting for a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Hi there, following what Coady said, I set the length to the maximum, 2^31-1, but the search results still don't include what I want. Simply put, I convert the document's words to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word it returns only 300 hits, although there are actually more than 300 results. For the same reason, when I search for a word in the back part of the document, it also isn't found. //////////////set the length indexwriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as everyone else's. I found this problem when I needed to count every word's hits in a document, and that is also how I found it couldn't find words in the back part of the document. Please help me figure out whether there is a searcher length to set somewhere, and how you have dealt with this problem.
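
    The usual explanation for words near the end of a large document never matching is IndexWriter's default maxFieldLength of 10,000 terms, which truncates each field at indexing time; raising a limit at search time cannot bring back terms that were never indexed. A hedged sketch (assuming the Java Lucene 2.9/3.x API; the question mixes Java and Lucene.NET spellings) of opening a writer without the cap so the documents can be re-indexed in full:

      import java.io.File;
      import org.apache.lucene.analysis.standard.StandardAnalyzer;
      import org.apache.lucene.index.IndexWriter;
      import org.apache.lucene.store.FSDirectory;
      import org.apache.lucene.util.Version;

      public class UnlimitedFieldIndexer {
          public static IndexWriter openWriter(File indexDir) throws Exception {
              // MaxFieldLength.UNLIMITED lifts the default 10,000-term cap per field,
              // so terms from the back part of a huge document are indexed too.
              return new IndexWriter(
                      FSDirectory.open(indexDir),
                      new StandardAnalyzer(Version.LUCENE_29),
                      true,   // create: re-index from scratch so previously truncated documents are replaced
                      IndexWriter.MaxFieldLength.UNLIMITED);
          }
      }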

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message: pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

    Read the article

  • Can I perform a search on a mail server in Java?

    - by twofivesevenzero
    I am trying to perform a search of my gmail using Java. With JavaMail I can do a message by message search like so: Properties props = System.getProperties(); props.setProperty("mail.store.protocol", "imaps"); Session session = Session.getDefaultInstance(props, null); Store store = session.getStore("imaps"); store.connect("imap.gmail.com", "myUsername", "myPassword"); Folder inbox = store.getFolder("Inbox"); inbox.open(Folder.READ_ONLY); SearchTerm term = new SearchTerm() { @Override public boolean match(Message mess) { try { return mess.getContent().toString().toLowerCase().indexOf("boston") != -1; } catch (IOException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } catch (MessagingException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } return false; } }; Message[] searchResults = inbox.search(term); for(Message m:searchResults) System.out.println("MATCHED: " + m.getFrom()[0]); But this requires downloading each message. Of course I can cache all the results, but this becomes a storage concern with large gmail boxes and also would be very slow (I can only imagine how long it would take to search through gigabytes of text...). So my question is, is there a way of searching through mail on the server, a la gmail's search field? Maybe through Microsoft Exchange? Hours of Googling has turned up nothing.
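
    For reference, JavaMail ships SearchTerm subclasses (BodyTerm, SubjectTerm, FromStringTerm and friends) that the IMAP provider translates into a server-side SEARCH command, so the matching happens on the server without downloading each message; a custom match() override like the one above forces client-side evaluation instead. A minimal sketch reusing the already-opened inbox folder from the code above:

      import javax.mail.Message;
      import javax.mail.search.BodyTerm;
      import javax.mail.search.OrTerm;
      import javax.mail.search.SubjectTerm;

      // BodyTerm/SubjectTerm are evaluated by the IMAP server, not by downloading content.
      Message[] hits = inbox.search(new OrTerm(new BodyTerm("boston"), new SubjectTerm("boston")));
      for (Message m : hits)
          System.out.println("MATCHED: " + m.getFrom()[0]);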

    Read the article

  • Attachment_fu file saving problem

    - by Anand
    Attachment_fu plugin is kind of old, but I have to modify an old app and I can't use another plugin like paperclip etc. So here's the code without further ado Submissions table structure --------------------------- | content_type | varchar(255) | YES | | NULL | filename | varchar(255) | YES | | NULL app/models/submission.rb ------------------------ has_attachment :storage => :file_system, :path_prefix => 'public/submissions', :max_size => 2.megabytes, :content_type => ['application/pdf', 'application/msword', 'text/plain'] app/models/user.rb ------------------ has_one :submission, :dependent => :destroy app/views/user/some_action.html.erb ----------------------------------- <% form_for :user, :url => { :action => "some_action" }, :html => {:multipart => true} do |f| %> .... <%= file_field_tag "submission[uploaded_data]" %> <%end%> app/controllers/user_controller.rb ---------------------------------- @user = User.find_user(session[:user_id]) @submission = @user.submission if request.post? @submission.uploaded_data = params[:submission][:uploaded_data] end When the form is submitted, the database fields "content_type" and "filename" get updated and display the correct values, but the file does not appear in public/submissions/ directory. I have checked the permissions on the submissions directory. What am I missing? Many Thanks
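
    One thing that stands out in the controller action shown here: uploaded_data is assigned but the record is never saved, and attachment_fu (as far as I recall) only writes the file to the :path_prefix directory from an after-save callback, so nothing reaches public/submissions until save is called. A hedged sketch of the action with an explicit save; the flash handling is only illustrative:

      if request.post?
        @submission.uploaded_data = params[:submission][:uploaded_data]
        if @submission.save
          flash[:notice] = "Submission uploaded."
        else
          flash[:error] = @submission.errors.full_messages.to_sentence
        end
      end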

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic in that I query the data provider for the tables it knows. I have a basic storage structure in place so far based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is ok to sort them in the OData provider. My main problem is this. I submit a SQL statement to get the data out of a table. The result is some sort of ADO.NET data reader here. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient) but I see no real way to filter / sort based on them.
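
    Since the tables are small, one common way to satisfy the IQueryable requirement on .NET 3.5 without any real query support is to materialize the data reader into in-memory rows and hand back a LINQ-to-Objects queryable via AsQueryable(); any later Where/OrderBy then runs in memory. A hedged sketch that keeps the dictionary-per-row approach mentioned above:

      using System.Collections.Generic;
      using System.Data;
      using System.Linq;

      public static class ReaderToQueryable
      {
          // Materialize an ADO.NET reader into dictionaries and expose LINQ to Objects over them.
          public static IQueryable<IDictionary<string, object>> ToQueryable(IDataReader reader)
          {
              var rows = new List<IDictionary<string, object>>();
              while (reader.Read())
              {
                  var row = new Dictionary<string, object>();
                  for (int i = 0; i < reader.FieldCount; i++)
                      row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                  rows.Add(row);
              }
              return rows.AsQueryable();   // later Where/OrderBy calls execute in memory
          }
      }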

    Read the article

  • How to Make a DVCS Completely Interoperable with Subversion?

    - by David M
    What architectural changes would a DVCS need to be completely interoperable with Subversion? Many DVCSs have some kind of bidirectional interface with Subversion, but there are limitations and caveats. For instance, git-svn can create a repository that mirrors Subversion, and changes to that repo can be sent back to Subversion via 'dcommit'. But the git-svn manpage explicitly cautions against making clones of that repository, so essentially, it's a Subversion working copy that you can use git commands on. Bazaar has a bidirectional Subversion capability too, but its documentation notes that Subversion properties aren't supported at all. Here's the end that I'm pursuing. I want a Subversion repository and a DVCS repository that, in the steady state, have identical content. When something is changed on one, it's automatically mirrored to the other. Subversion users interact with the Subversion repository normally. DVCS users clone the DVCS repository, pull changes from it, and push changes back to it. Most importantly, they don't need to know that this special DVCS repository is associated with a Subversion repository. It would probably be nifty if any clone of the special repository is itself a special repository and could commit directly to Subversion, but it might be sufficient if only the special repository directly interacts with Subversion. I think that's what mostly needed is to improve the bidirectional capability so that changes to Subversion properties are translated to changes in the DVCS repository. Some changes in the DVCS repository would be translated to changes to Subversion properties. Or is the answer to create a new capability in Subversion that interacts with a DVCS repository, using the DVCS repository as just a special storage layer such as fsfs or bdb? If there's not a direct mapping between the things that Subversion and a DVCS regard as having versions, does that imply that there's always going to be some activity that cannot be recorded properly on one or the other?

    Read the article

  • How to prevent the LINQ-to-SQL designer from undoing my changes

    - by anonim.developer
    Dear All, Thanks for your attention in advance. I've run into an issue with the LINQ-to-SQL designer in VS 2008 SP1 which is driving me crazy. I use Linq2sql as my DAL. It seems Linq2sql speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the Linq2sql designer inherit from. public abstract class Entity { public virtual Guid ID { get; protected set; } } public partial class User : monius.Data.Entity { } And the following is generated by the L2S designer (DataModel.designer.cs): [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL", IsPrimaryKey = true, IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)] [DataMember(Order = 1)] public System.Guid ID { get { return this._ID; } set { if ((this._ID != value)) { this.OnIDChanging(value); this.SendPropertyChanging(); this._ID = value; this.SendPropertyChanged("ID"); this.OnIDChanged(); } } } When I compile the code VS warns me that Warning 1 'User.ID' hides inherited member 'Entity.ID'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword. That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to […] public override System.Guid ID { … protected set … } And the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make changes to entities in the diagram (dbml), or even open the dbml file to view it, any change I made manually to the designer file vanishes and POOF! I have to redo it again. That is a painful job. Now I wonder if there is a way to force the L2S designer not to overwrite portions of the auto-generated code. I'd appreciate it if someone could kindly help me with this issue.

    Read the article

  • Switching textstorage of NSTextViews back and forth

    - by Jakob Dam Jensen
    I'm trying to make a feature in a product which gives the user the ability to split a textview into two. The way this is done is by removing the textview from it's superview, making a NSSplitView and adding the textview as well as a new NSTextView instance to this splitview. Lastly I make these two textviews share the same textstorage in order to make them share the same content. It works great. But the problem is when I want to make one of the two textviews change textstorage. The replaceTextStorage method in NSLayoutManager causes both NSTextView to change textStorage. The API documentation states: replaceTextStorage: All NSLayoutManager objects sharing the original NSTextStorage object then share the new one. This method makes all the adjustments necessary to keep these relationships intact, unlike setTextStorage:. So it makes sense that it would do this. But the question is how do I make it possible to have two (or more) textviews first share the same storage and after that having them using their own? I've tried replacing the layoutManager and even making new instances of NSTextViews but no luck... Any suggestions?

    Read the article

  • AJAX Autosave

    - by antony.trupe
    What's the best JavaScript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid. Think Gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function. Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework one level higher, which interacts with the form itself. daemach introduced an implementation on top of jQuery @ http://ideamill.synaptrixgroup.com/?p=3. I'm not convinced it meets the lightweight and well engineered criteria though. Criteria: stable, lightweight, well engineered; saves onChange and/or onBlur; saves no more frequently than a given number of milliseconds; handles multiple updates happening at the same time; doesn't save if no change has occurred since the last save; saves to different URLs per input class. Update: I've stabilized a solution. See my answer below for links.
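
    For what it's worth, a hedged sketch of the core behaviour those criteria describe, in plain jQuery (jQuery 1.7+ assumed; the selector, URL scheme and one-second delay are placeholders rather than a recommendation of any particular library):

      var lastSaved = {};
      $('form.autosave :input').on('change blur', function () {
          var $field = $(this);
          clearTimeout($field.data('saveTimer'));               // debounce per field
          $field.data('saveTimer', setTimeout(function () {
              var name = $field.attr('name'), value = $field.val();
              if (lastSaved[name] === value) return;            // skip if nothing changed since the last save
              $.post('/autosave/' + $field.attr('class'),       // a different URL per input class, per the criteria
                     { field: name, value: value },
                     function () { lastSaved[name] = value; });
          }, 1000));                                            // at most one save per field per second of quiet
      });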

    Read the article

  • Pattern for the following condition in Java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service" and I have to search for some word, for example "google insights". How do I write the code in Java? I have written a small piece of code; please check it and answer my question. That code only finds where the search word is, but I need to display some words in front of the search word and some words after the search word, similar to a Google search result. My code is Pattern p = Pattern.compile("(?i)(.*?)"+search+""); Matcher m = p.matcher(full); String title=""; while (m.find() == true) { title=m.group(1); System.out.println(title); } Here full is the original content and search is the search word. Thanks in advance.
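
    A hedged sketch of one way to build a Google-style snippet with a regex: capture up to a few whitespace-delimited words on each side of the term. The five-word window is an arbitrary choice, and Pattern.quote() protects against special characters in the search string:

      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class SnippetFinder {
          public static void main(String[] args) {
              String full = "... Google Blog Insights to Google products and culture ...";
              String search = "insights";
              // Up to 5 words before and after the (case-insensitively) matched term.
              Pattern p = Pattern.compile(
                      "(?i)(?:\\S+\\s+){0,5}" + Pattern.quote(search) + "(?:\\s+\\S+){0,5}");
              Matcher m = p.matcher(full);
              while (m.find()) {
                  System.out.println("..." + m.group() + "...");
              }
          }
      }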

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon s3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaced the string with the info): https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418 This is happening in development (localhost:3000) and when I am using heroku for production. Here is my uploader: class ImageUploader < CarrierWave::Uploader::Base include CarrierWave::RMagick storage :fog def store_dir "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}" end process :convert => :jpg process :resize_to_limit => [640, 640] version :thumb do process :convert => :jpg process :resize_to_fill => [280, 205] end version :avatar do process :convert => :jpg process :resize_to_fill => [120, 120] end end And my config/initializers/fog.rb : CarrierWave.configure do |config| config.fog_credentials = { :provider => 'AWS', :aws_access_key_id => 'XXX', :aws_secret_access_key => 'XXX', } config.fog_directory = 'bucketname' config.fog_public = false end Anyone know how to make sure this information isn't available?
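
    The AWSAccessKeyId/Signature/Expires query string is what fog produces when fog_public is false: a time-limited signed URL for a private S3 object (the access key id on its own is not the secret key, and the link expires). If the images aren't sensitive, the usual approach is to make them public so plain unsigned URLs are served; otherwise they can stay private with a chosen expiry. A hedged initializer sketch; the environment-variable names and the expiration option are assumptions to check against the CarrierWave docs:

      CarrierWave.configure do |config|
        config.fog_credentials = {
          :provider              => 'AWS',
          :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],      # keep keys out of the source tree
          :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
        }
        config.fog_directory = 'bucketname'
        config.fog_public    = true    # plain public URLs, no signature in the query string
        # Or keep objects private but control how long each signed URL stays valid:
        # config.fog_public = false
        # config.fog_authenticated_url_expiration = 600   # seconds (assumed option; verify for your version)
      end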

    Read the article
