Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

  • Send post request from client to node.js

    - by Husar
    In order to learn node.js I have built a very simple guestbook app. There is basically just a comment form and a list of previous comments. Currently the app is client-side only and the items are stored in local storage. What I want to do is send the items to node, where I will save them using MongoDB. The problem is I have not yet found a way to establish a connection to send data back and forth between the client and node.js using POST requests. What I do now is add listeners to the request and wait for data I send: request.addListener('data', function(chunk) { console.log("Received POST data chunk '"+ chunk + "'."); }); On the client side I send the data using a simple AJAX request: $.ajax({ url: '/', type: 'post', dataType: 'json', data: 'test' }) This does not work at all at the moment. It could be that I don't know what URL to place in the AJAX request 'url' parameter, or the whole thing might just be built using the wrong approach. I have also tried implementing the method described here, but also with no success. It would really help if anyone could share some tips on how to make this work (sending a POST request from the client side to node and back) or share any good tutorials. Thanks.

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf: source src_questions { # data source type = mysql sql_host = xxxxxx sql_user = xxxxxx #replace with your db username sql_pass = xxxxxx #replace with your db password sql_db = xxxxxx #replace with your db name # these two are optional sql_port = xxxxxx #sql_sock = /var/lib/mysql/mysql.sock # pre-query, executed before the main fetch query sql_query_pre = SET NAMES utf8 # main document fetch query sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \ FROM question AS q \ WHERE q.deleted=0 \ # optional - used by command-line search utility to display document information sql_query_info = SELECT title, id, level FROM question WHERE id=$id sql_attr_uint = level } index questions { # which document source to index source = src_questions # this is path and index file name without extension # you may need to change this path or create this folder path = /home/rafal/core_index/index_questions # docinfo (ie. per-document attribute values) storage strategy docinfo = extern # morphology morphology = stem_en # stopwords file #stopwords = /var/data/sphinx/stopwords.txt # minimum word length min_word_len = 3 # uncomment next 2 lines to allow wildcard (*) searches min_infix_len = 1 enable_star = 1 # charset encoding type charset_type = utf-8 } # indexer settings indexer { # memory limit (default is 32M) mem_limit = 64M } # searchd settings searchd { # IP address on which search daemon will bind and accept # optional, default is to listen on all addresses, # ie. address = 0.0.0.0 address = 127.0.0.1 # port on which search daemon will listen port = 3312 # searchd run info is logged here - create or change the folder log = ../log/sphinx.log # all the search queries are logged here query_log = ../log/query.log # client read timeout, seconds read_timeout = 5 # maximum amount of children to fork max_children = 30 # a file which will contain searchd process ID pid_file = searchd.pid # maximum amount of matches this daemon would ever retrieve # from each index and serve to client max_matches = 1000 } and here's the search part from my views.py: content = Question.search.query(keywords) if level: content = content.filter(level=level) # level is an array of integers There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to better understand the PocketC language, and I've done this: #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc" int filehandle; int file_len; string file_mode; initComponents() { createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX); wndshow(TEXTBOX, SW_SHOW); guigetfocus(); } main() { filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE); file_len = filegetlen(filehandle); if(filehandle = -1) { MessageBox("File Could Not Be Found!", "Error", 3, 1); } initComponents(); editset(TEXTBOX, fileread(filehandle, file_len)); } Then I tried to run the application: it opens the Open File dialog, I select a file (at \test.txt) that I created with Notepad, and then I get my MessageBox saying that the file wasn't found. I want to know why I'm getting this if the file is correct. *PS: When I click to close the MessageBox, I see that the TextBox is displaying where the file is (I've tested with many other files, and with all of them I got the error and this behaviour).

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF, ASP.NET/C# for server and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely. Problem I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element)? Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will. Options I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include: Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this) Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment). Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet. I'm interested in comments on these approaches as well as alternative approaches that might solve the problem. Other Notes The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
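
    Not an answer to the WCF-specific binding question, but a minimal sketch of the EAV-to-property-bag flattening that option 1 relies on, written in Java purely for illustration (the C#/WCF equivalent would hand the resulting dictionaries to the service layer); the row shape and all names are hypothetical.

    ```java
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    /** One persisted EAV row: (entity id, attribute name, value). Hypothetical shape. */
    record EavRow(long entityId, String attribute, Object value) {}

    class EavFlattener {
        /**
         * Collapses EAV rows into one property bag per entity. The bags can then
         * be serialized as dictionaries over the wire and bound on the client.
         */
        static Map<Long, Map<String, Object>> flatten(List<EavRow> rows) {
            Map<Long, Map<String, Object>> entities = new LinkedHashMap<>();
            for (EavRow row : rows) {
                entities.computeIfAbsent(row.entityId(), id -> new LinkedHashMap<>())
                        .put(row.attribute(), row.value());
            }
            return entities;
        }
    }
    ```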

    Read the article

  • Security implications of writing files using PHP

    - by susmits
    I'm currently trying to create a CMS using PHP, purely in the interest of education. I want the administrators to be able to create content, which will be parsed and saved on the server storage in pure HTML form to avoid the overhead that executing PHP script would incur. Unfortunately, I could only think of a few ways of doing so: Setting write permission on every directory where the CMS should want to write a file. This sounds like quite a bad idea. Setting write permissions on a single cached directory. A PHP script could then include or fopen/fread/echo the content from a file in the cached directory at request-time. This could perhaps be carried out in a Mediawiki-esque fashion: something like index.php?page=xyz could read and echo content from cached/xyz.html at runtime. However, I'll need to ensure the sanity of $_GET['page'] to prevent nasty variations like index.php?page=http://www.bad-site.org/malicious-script.js. I'm personally not too thrilled by the second idea, but the first one sounds very insecure. Could someone please suggest a good way of getting this done?
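
    For the second approach, the usual defence is a strict whitelist on the page name plus a canonical-path check before reading the cached file; in PHP that means validating $_GET['page'] with something like preg_match before any include or readfile. A sketch of those two checks, written in Java with hypothetical names, just to show the shape of the validation:

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class CachedPageLoader {
        private final Path cacheDir;

        CachedPageLoader(Path cacheDir) {
            this.cacheDir = cacheDir.toAbsolutePath().normalize();
        }

        /** Returns cached HTML for a page name, rejecting anything that is not a
         *  plain slug or that would resolve outside the cache directory. */
        String load(String page) throws IOException {
            if (!page.matches("[A-Za-z0-9_-]{1,64}")) {   // whitelist: no slashes, dots or URLs
                throw new IllegalArgumentException("bad page name");
            }
            Path target = cacheDir.resolve(page + ".html").normalize();
            if (!target.startsWith(cacheDir)) {           // belt and braces against traversal
                throw new IllegalArgumentException("page resolves outside the cache");
            }
            return Files.readString(target);
        }
    }
    ```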

    Read the article

  • Managing/Storing complex application form

    - by mickyjtwin
    I am developing an online application form, which can be one of two categories, either domestic or international. Each type has common questions, and also specific questions. There are approximately 6 pages/steps of questions. The application form can also be saved at the end of each step, and can be completed at a later date. Some questions will vary depending on answers to previous questions, so there are some complex business rules. In terms of storing the results, would it be best to store the answers in an XML field in SQL? What would be the best way to reference a question to an answer? There is no need for the form to be dynamic in rendering questions, etc. e.g. <Application id="123" type="Domestic"> <Answers> <EmailAddress>[email protected]</EmailAddress> <HomePhone>555-5555</HomePhone> <Suburb>MySuburb</Suburb> <Guardians> <Guardian type="Mother"> <FirstName>Mom</FirstName> </Guardian> </Guardians> </Answers> </Application>

    Read the article

  • If an attacker has original data and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example: e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force? This question may sound strange, so let me provide a use-case: User signs up to a site with their e-mail address Server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com) Attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem. Instead of sending the e-mail address in plain text in the URL, we'll send it encrypted with a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption? Alternative Solution I could instead send a random number or one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database. The original approach above does not require storage in the database. I'm leaning towards the one-way-hash-stored-in-the-db, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
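
    One way to sidestep the question entirely is to put a keyed hash (an HMAC) of the address in the URL rather than an encrypted or bare address: the server recomputes the tag from the submitted e-mail address at confirmation time, so nothing per-user is stored, and resisting key recovery from many known (message, tag) pairs is exactly what a MAC is designed for. A minimal sketch, with hypothetical names:

    ```java
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.HexFormat;

    class ConfirmationToken {
        /** Derives a fixed-length, non-reversible token for the confirmation URL.
         *  Running the same computation at confirmation time verifies the address. */
        static String forEmail(String email, byte[] serverSecret) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(serverSecret, "HmacSHA256"));
            byte[] tag = mac.doFinal(email.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(tag);  // e.g. appended as ?email=...&token=...
        }
    }
    ```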

    Read the article

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code: BigDecimal myBD = new BigDecimal("" + someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in the DB... In more than 99% of cases, everything works fine, except that in a few rare cases the bug mentioned above occurs. And that's quite annoying. If I change the previous code to avoid the use of the String constructor of BigDecimal, then I do not encounter the bug in my use cases: BigDecimal myBD = new BigDecimal(someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in the DB... However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue: Do not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly? Force Oracle to use the toPlainString() method instead of toString() when dealing with BigDecimal (and in that case, how do I do that)? Any other solution? Environment information: Java 1.6.0_14 Hibernate 2.1.8 (yes, it is quite an old version) Oracle JDBC 9.0.2.0 and also tested with 10.2.0.3.0 Oracle database 10.2.0.3.0
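
    A small sketch of where the constructors and the two string forms differ. The Java 1.5 change is that toString() may now return scientific notation, which is what toPlainString() was added for; whether Hibernate or the old Oracle driver relies on toString() internally is the environment-specific part, so treat the values below only as an illustration:

    ```java
    import java.math.BigDecimal;

    public class BigDecimalForms {
        public static void main(String[] args) {
            double someDoubleValue = 7.65E7; // 76,500,000

            BigDecimal fromString = new BigDecimal("" + someDoubleValue); // parses "7.65E7"
            BigDecimal fromDouble = new BigDecimal(someDoubleValue);      // exact binary value
            BigDecimal viaValueOf = BigDecimal.valueOf(someDoubleValue);  // Double.toString based

            System.out.println(fromString.toString());      // 7.65E+7  (scientific since Java 5)
            System.out.println(fromString.toPlainString()); // 76500000
            System.out.println(fromDouble.toPlainString()); // 76500000 (exact here; 0.1 would not be)
            System.out.println(viaValueOf.toPlainString()); // 76500000
        }
    }
    ```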

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there is a definitive list somewhere regarding what is and is not supported by SQL Azure compared to SQL Server 2008. I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing: For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html Common Language Runtime (CLR) Database file placement Database mirroring Distributed queries Distributed transactions Filegroup management Global temporary tables Spatial data and indexes SQL Server configuration options SQL Server Service Broker System tables Trace Flags which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure: XML types (the MSDN page does mention large custom types - I guess that may include these, even if the data schema is really small?) Multi-part views I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • XML: what processing rules apply for values intertwined with tags?

    - by iCE-9
    I've started working on a simple XML pull-parser, and as I've just defuzzed my mind on what's correct syntax in XML with regards to certain characters/sequences, ignorable whitespace and such (thank you, http://www.w3schools.com/xml/xml_elements.asp), I realized that I still don't know squat about the case sketched up below (which Validome considers perfectly well-formed; note that I only want to use XML files for data storage, no entities, DTDs or Schemas needed): <bookstore> <book id="1"> <author>Kurt Vonnegut Jr.</author> <title>Slapstick</title> </book> We drop a pie here. <book id="2">Who cares anyway? <author>Stephen King</author> <title>The Green Mile</title> </book> And another one here. <book id="3"> <author>Next one</author> <title>This time with its own title</title> </book> </bookstore> "We drop a pie here." and "And another one here." are values of the 'bookstore' element. "Who cares anyway?" is a value related to the second 'book' element. How are these processed, if at all? Will "We drop a pie here." and "And another one here." be concatenated to form one value for the 'bookstore' element, or are they treated separately, stored somewhere, affecting the outcome of the parsing of the element they belong to, or...?
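
    What a conforming parser hands back is mixed content: each run of character data is reported separately (as its own text node in DOM, or its own characters event in a pull parser), attached to the element that directly contains it, and runs separated by child elements are not concatenated for you. A small DOM sketch, assuming the snippet above is saved as bookstore.xml:

    ```java
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    import javax.xml.parsers.DocumentBuilderFactory;
    import java.io.File;

    class MixedContentDemo {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                                                 .newDocumentBuilder()
                                                 .parse(new File("bookstore.xml"));
            Element bookstore = doc.getDocumentElement();

            NodeList children = bookstore.getChildNodes();
            for (int i = 0; i < children.getLength(); i++) {
                Node child = children.item(i);
                if (child.getNodeType() == Node.TEXT_NODE) {
                    // "We drop a pie here." and "And another one here." arrive as two
                    // separate text-node children of <bookstore>; "Who cares anyway?"
                    // belongs to the second <book> instead.
                    String text = child.getTextContent().trim();
                    if (!text.isEmpty()) {
                        System.out.println("text under <bookstore>: [" + text + "]");
                    }
                } else if (child.getNodeType() == Node.ELEMENT_NODE) {
                    System.out.println("element: " + child.getNodeName());
                }
            }
        }
    }
    ```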

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • What is the fastest collection in C# to implement a priority queue?

    - by Nathan Smith
    I need to implement a queue for messages on a game server, so it needs to be as fast as possible. The queue will have a maximum size. I need to prioritize messages once the queue is full by working backwards and removing a lower-priority message (if one exists) before adding the new message. The application is asynchronous, so access to the queue needs to be locked. I'm currently implementing it using a LinkedList as the underlying storage but have concerns that searching and removing nodes will keep it locked for too long. Here's the basic code I have at the moment: public class ActionQueue { private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>(); private int _maxSize; /// <summary> /// Initializes a new instance of the ActionQueue class. /// </summary> public ActionQueue(int maxSize) { _maxSize = maxSize; } public int Count { get { return _actions.Count; } } public void Enqueue(ClientAction action) { lock (_actions) { if (Count < _maxSize) _actions.AddLast(action); else { LinkedListNode<ClientAction> node = _actions.Last; while (node != null) { if (node.Value.Priority < action.Priority) { _actions.Remove(node); _actions.AddLast(action); break; } } } } } public ClientAction Dequeue() { ClientAction action = null; lock (_actions) { action = _actions.First.Value; _actions.RemoveFirst(); } return action; } }
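
    One common shape for a bounded priority queue is a sorted map of per-priority FIFO buckets: eviction drops the oldest entry of the lowest priority, and both ends stay O(log p) in the number of distinct priorities instead of an O(n) scan under the lock. Sketched here in Java (TreeMap) purely to show the structure; a C# counterpart would be a SortedDictionary of Queues, and all names below are made up:

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.Map;
    import java.util.TreeMap;

    /** Bounded priority queue: dequeues highest priority first, and when full evicts
     *  the oldest entry of the lowest priority to make room for a higher-priority one. */
    class BoundedPriorityQueue<T> {
        private final TreeMap<Integer, Deque<T>> buckets = new TreeMap<>(); // key = priority
        private final int maxSize;
        private int size;

        BoundedPriorityQueue(int maxSize) { this.maxSize = maxSize; }

        synchronized boolean enqueue(int priority, T item) {
            if (size == maxSize) {
                Map.Entry<Integer, Deque<T>> lowest = buckets.firstEntry();
                if (lowest.getKey() >= priority) return false; // nothing lower to evict: drop it
                lowest.getValue().pollFirst();                 // evict oldest of the lowest bucket
                if (lowest.getValue().isEmpty()) buckets.remove(lowest.getKey());
                size--;
            }
            buckets.computeIfAbsent(priority, p -> new ArrayDeque<>()).addLast(item);
            size++;
            return true;
        }

        synchronized T dequeue() {
            Map.Entry<Integer, Deque<T>> highest = buckets.lastEntry();
            if (highest == null) return null;
            T item = highest.getValue().pollFirst();           // FIFO within a priority
            if (highest.getValue().isEmpty()) buckets.remove(highest.getKey());
            size--;
            return item;
        }
    }
    ```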

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images: Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: Storing images in a way that allows for rapid random access (tiling) Storing downsampled images for rapid zooming (pyramid) Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image) Storing important metadata, including associated images like a slide's label and thumbnail Support for lossy storage What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • Creating spotlight in OpenGL scene

    - by Victor Oliveira
    I'm studying OpenGL and trying to create a spotlight in my application. The code I'm using for my #vertex-shader is below: #:vertex-shader #{ #version 150 core in vec3 in_pos; in vec2 in_tc; out vec2 tc; glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 20.0f); GLfloat spot_direction[] = { -1.0, -1.0, 0.0 }; glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spot_direction); glEnable(GL_LIGHT0); void main() { vec4 pos= vec4(vec3(1.0)*in_pos - vec3(1.0), 1.0); pos.z=0.0; gl_Position = pos; tc = in_tc; } } The thing is, every time I try to run the code I get an error that says: Type: other, Source: api, ID: 131169, Severity: low Message: Framebuffer detailed info: The driver allocated storage for renderbuffer 1. len = 157, written = 0 failed to compile vertex shader of deferred: directional info log for shader deferred: directional vertex info log for shader deferred: directional: ERROR: Unbound variable: when Specifications: Renderer: GeForce GTX 580/PCIe/SSE2 Version: 3.3.0 NVIDIA 319.17 GLSL: 3.30 NVIDIA via Cg compiler Status: Using GLEW 1.9.0 1024 x 768 OS: Linux debian I guess creating this spotlight is pretty simple, but since I'm really new to OpenGL I don't have a clue how to do it so far, even after reading sources like: http://www.glprogramming.com/red/chapter05.html#name3 I've also read that spotlights can be really hard to understand, but I can't avoid this step right now since I'm following my lecture schedule. Could anybody help me?

    Read the article

  • How to put large text data (~20MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data: internal static string InsertStorageItem = "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)"; and the code I am using to execute this query is as follows: string content = "very very large data"; string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM"); var command = new SqlCeCommand(query, _sqlConnection); var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText); paramData.Value = content; paramData.SourceColumn = "StorageData"; command.ExecuteNonQuery(); But at the last line I get the following error: System.Data.SqlServerCe.SqlCeException was unhandled by user code Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ] Source=SQL Server Compact ADO.NET Data Provider HResult=-2147467259 NativeError=25920 StackTrace: at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options) at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery() at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders() at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) InnerException: Now my question is: how am I supposed to insert such large data into a SQL CE database? Regards, Anindya Chatterjee http://abstractclass.org

    Read the article

  • Is there a Boost (or other common lib) type for matrices with string keys?

    - by mohawkjohn
    I have a dense matrix where the indices correspond to genes. While gene identifiers are often integers, they are not contiguous integers. They could be strings instead, too. I suppose I could use a boost sparse matrix of some sort with integer keys, and it wouldn't matter if they're contiguous. Or would this still occupy a great deal of space, particularly if some genes have identifiers that are nine digits? Further, I am concerned that sparse storage is not appropriate, since this is an all-by-all matrix (there will be a distance in each and every cell, provided the gene exists). I'm unlikely to need to perform any matrix operations (e.g., matrix multiplication). I will need to pull vectors out of the matrix (slices). It seems like the best type of matrix would be keyed by a Boost unordered_map (a hash map), or perhaps even simply an STL map. Am I looking at this the wrong way? Do I really need to roll my own? I thought I saw such a class somewhere before. Thanks!
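
    Since the matrix is dense and all-by-all, one low-overhead option is a single hash map from gene identifier to a contiguous index in front of a plain 2-D array, so string keys live only in a thin lookup layer and a slice is just a row. Sketched in Java for illustration (the C++ shape is the same idea: an unordered_map from identifier to index in front of a dense matrix); the class and method names are made up:

    ```java
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    /** Dense all-by-all distance matrix addressed by gene identifier. */
    class GeneDistanceMatrix {
        private final Map<String, Integer> index = new HashMap<>(); // gene id -> row/column
        private final double[][] dist;

        GeneDistanceMatrix(List<String> geneIds) {
            for (String id : geneIds) index.put(id, index.size());
            dist = new double[geneIds.size()][geneIds.size()];
        }

        void set(String a, String b, double d) { dist[index.get(a)][index.get(b)] = d; }
        double get(String a, String b)         { return dist[index.get(a)][index.get(b)]; }
        double[] row(String a)                 { return dist[index.get(a)]; } // a "slice"
    }
    ```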

    Read the article

  • error C3662: override specifier 'new' only allowed on member functions of managed classes

    - by William
    Okay, so I'm trying to override a function in a parent class, and getting some errors. here's a test case #include <iostream> using namespace std; class A{ public: int aba; void printAba(); }; class B: public A{ public: void printAba() new; }; void A::printAba(){ cout << "aba1" << endl; } void B::printAba() new{ cout << "aba2" << endl; } int main(){ A a = B(); a.printAba(); return 0; } And here's the errors I'm getting: Error 1 error C3662: 'B::printAba' : override specifier 'new' only allowed on member functions of managed classes c:\users\test\test\test.cpp 12 test Error 2 error C2723: 'B::printAba' : 'new' storage-class specifier illegal on function definition c:\users\test\test\test.cpp 19 test How the heck do I do this?

    Read the article

  • Drawing only part of a texture OpenGL ES iPhone

    - by Ben Reeves
    ..Continued on from my previous question. I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone. - (void)setupView { glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320}); glEnable(GL_TEXTURE_2D); } // Updates the OpenGL view when the timer fires - (void)drawView { // Make sure that you are drawing to the current context [EAGLContext setCurrentContext:context]; //Get the 320*480 buffer const int8_t * frameBuf = [source getNextBuffer]; //Create enough storage for a 512x512 power of 2 texture int8_t lBuf[2*512*512]; memcpy (lBuf, frameBuf, 320*480*2); //Upload the texture glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf); //Draw it glDrawTexiOES(0, 0, 1, 480, 320); [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } If I produce the original texture at 512*512, the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine: int8_t lBuf[512][512][2]; const char * frameDataP = frameData; for (int ii = 0; ii < 480; ++ii) { memcpy(lBuf[ii], frameDataP, 320); frameDataP += 320; } which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.

    Read the article

  • Old desktop programmer wants to create S+S project

    - by Craig
    I have an idea for a product that I want to be web-based. But because I live in Brasil, the internet is not always available, so there needs to be a desktop component that is available for when the internet is down. Also, I have been a SQL programmer, a desktop application programmer using dBase, VB and Pascal, and I have created simple websites using HTML and website creation tools, such as FrontPage. So from my research, I think I have the following options: PHP, Ruby on Rails, Python or .NET for the programming side; MySQL for the DB; and Apache, or possibly IIS, for the web server. I will probably start with a local ISP provider for the cloud service, but then maybe move to something more "robust" and universal in the future, i.e. Amazon, Azure, or something along those lines. My question then is this: what would you recommend for something like this? I'm sure that I have not listed all of the possibilities, only the ones I have researched and thought of. Thanks everyone, Craig

    Read the article

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete noob when it comes to Git; I have just been taking my first steps over the last few days. I set up a repo on my laptop and pulled down the trunk from an SVN project (I had some issues with branches and haven't got them working), but all seems OK there. I now want to be able to pull or push from the laptop to my main desktop. The reason being, the laptop is handy on the train, as I spend 2 hours a day travelling and can get some good work done, but my main machine at home is great for development. So I want to be able to push/pull from the laptop to the main computer when I get home. I thought the simplest way of doing this would be to just share the code folder across the LAN and do: git clone file://192.168.10.51/code Unfortunately this doesn't seem to be working for me. I open a Git Bash prompt in C:\code (the shared folder for both machines) and type the above command; this is what I get back: Initialized empty Git repository in C:/code/code/.git/ fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository fatal: The remote end hung up unexpectedly How can I share the repository between the two machines in the simplest way? There will be other locations that will be official storage points and places where the other devs, CI server etc. will pull from; this is just so that I can work on the same repo across two machines. Thanks

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well on huge documents, such as a .pst file (the Outlook mail storage); it can build an index file that includes all the information in the .pst. The only problem is that the document is sometimes too large and contains a great many words. So when I search using Lucene, it can only process the front part of the indexed document; if a word appears in the back part, it can't be found and there are no hits in the result. But when I split the document into several parts in a crude way while debugging and search each part, it works well. So I want to know how to split the indexed document, and what size limit applies when searching? Cheers, and waiting for a reply. ++++++++++++++++++++++++++++++++++++++++++++++++++ Hi there, following what Coady said, I set the length to the max, 2^31-1. But the search results still don't include what I want. Simply put, I convert the document's words to a string array to analyze them; one document has 79680 words, including spaces and symbols. When I search for a certain word, it returns a count of only 300, when actually there are more than 300 results. For the same reason, when I search for a word in the back part of the document, it also can't be found. //////////////set the length idexwriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as others use. I found this problem when I needed to count every word's hits in a document, and also found that it couldn't find words in the back part of the document. Please help me: is there a searcher length setting somewhere? How have you dealt with this problem?

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message: pickle.PicklingError: Can't pickle <type 'PySwigObject'>: it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

    Read the article

  • Can I perform a search on a mail server in Java?

    - by twofivesevenzero
    I am trying to perform a search of my gmail using Java. With JavaMail I can do a message by message search like so: Properties props = System.getProperties(); props.setProperty("mail.store.protocol", "imaps"); Session session = Session.getDefaultInstance(props, null); Store store = session.getStore("imaps"); store.connect("imap.gmail.com", "myUsername", "myPassword"); Folder inbox = store.getFolder("Inbox"); inbox.open(Folder.READ_ONLY); SearchTerm term = new SearchTerm() { @Override public boolean match(Message mess) { try { return mess.getContent().toString().toLowerCase().indexOf("boston") != -1; } catch (IOException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } catch (MessagingException ex) { Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex); } return false; } }; Message[] searchResults = inbox.search(term); for(Message m:searchResults) System.out.println("MATCHED: " + m.getFrom()[0]); But this requires downloading each message. Of course I can cache all the results, but this becomes a storage concern with large gmail boxes and also would be very slow (I can only imagine how long it would take to search through gigabytes of text...). So my question is, is there a way of searching through mail on the server, a la gmail's search field? Maybe through Microsoft Exchange? Hours of Googling has turned up nothing.
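
    The anonymous SearchTerm above cannot be expressed as an IMAP command, so JavaMail falls back to fetching each message and running match() locally, which is why everything gets downloaded. The built-in terms (SubjectTerm, BodyTerm, FromTerm, and combinators such as OrTerm) are translated into a server-side IMAP SEARCH instead, which is roughly what Gmail's own search box does over IMAP. A sketch of the same lookup using those terms, reusing the placeholder credentials from the question:

    ```java
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Store;
    import javax.mail.search.BodyTerm;
    import javax.mail.search.OrTerm;
    import javax.mail.search.SubjectTerm;
    import java.util.Properties;

    class ServerSideSearch {
        public static void main(String[] args) throws Exception {
            Properties props = System.getProperties();
            props.setProperty("mail.store.protocol", "imaps");
            Session session = Session.getDefaultInstance(props, null);
            Store store = session.getStore("imaps");
            store.connect("imap.gmail.com", "myUsername", "myPassword");

            Folder inbox = store.getFolder("Inbox");
            inbox.open(Folder.READ_ONLY);

            // Standard terms are turned into an IMAP SEARCH command, so matching
            // happens on the server and only the matching headers come back.
            Message[] hits = inbox.search(new OrTerm(new SubjectTerm("boston"),
                                                     new BodyTerm("boston")));
            for (Message m : hits) {
                System.out.println("MATCHED: " + m.getFrom()[0]);
            }
            inbox.close(false);
            store.close();
        }
    }
    ```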

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article
