Search Results

Search found 30270 results on 1211 pages for 'bart read'.


  • public class ImageHandler : IHttpHandler

    - by Ken
    cmd.Parameters.AddWithValue("@id", new system.Guid (imageid)); What using System reference would this require? Here is the handler: using System; using System.Collections.Specialized; using System.Web; using System.Web.Configuration; using System.Web.Security; using System.Globalization; using System.Configuration; using System.Data.SqlClient; using System.Data; using System.IO; using System.Web.Profile; using System.Drawing; public class ImageHandler : IHttpHandler { public void ProcessRequest(HttpContext context) { string imageid; if (context.Request.QueryString["id"] != null) imageid = (context.Request.QueryString["id"]); else throw new ArgumentException("No parameter specified"); context.Response.ContentType = "image/jpeg"; Stream strm = ShowProfileImage(imageid.ToString()); byte[] buffer = new byte[8192]; int byteSeq = strm.Read(buffer, 0, 8192); while (byteSeq > 0) { context.Response.OutputStream.Write(buffer, 0, byteSeq); byteSeq = strm.Read(buffer, 0, 8192); } //context.Response.BinaryWrite(buffer); } public Stream ShowProfileImage(String imageid) { string conn = ConfigurationManager.ConnectionStrings["MyConnectionString1"].ConnectionString; SqlConnection connection = new SqlConnection(conn); string sql = "SELECT image FROM Profile WHERE UserId = @id"; SqlCommand cmd = new SqlCommand(sql, connection); cmd.CommandType = CommandType.Text; cmd.Parameters.AddWithValue("@id", new system.Guid (imageid));//Failing Here!!!! connection.Open(); object img = cmd.ExecuteScalar(); try { return new MemoryStream((byte[])img); } catch { return null; } finally { connection.Close(); } } public bool IsReusable { get { return false; } } }

    Read the article

  • CSV is actually .... Semicolon Separated Values ... (Excel export on AZERTY)

    - by Bugz R us
    I'm a bit confused here. When I use Excel 2003 to export a sheet to CSV, it actually uses semicolons ... Col1;Col2;Col3 shfdh;dfhdsfhd;fdhsdfh dgsgsd;hdfhd;hdsfhdfsh Now when I read the CSV using Microsoft drivers, it expects commas and sees the list as one big column ??? I suspect Excel is exporting with semicolons because I have an AZERTY keyboard. However, doesn't the CSV reader then also have to take into account the different delimiter? How can I know the appropriate delimiter, and/or read the CSV properly? public static DataSet ReadCsv(string fileName) { DataSet ds = new DataSet(); string pathName = System.IO.Path.GetDirectoryName(fileName); string file = System.IO.Path.GetFileName(fileName); OleDbConnection excelConnection = new OleDbConnection (@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + pathName + ";Extended Properties=Text;"); try { OleDbCommand excelCommand = new OleDbCommand(@"SELECT * FROM " + file, excelConnection); OleDbDataAdapter excelAdapter = new OleDbDataAdapter(excelCommand); excelConnection.Open(); excelAdapter.Fill(ds); } catch (Exception exc) { throw exc; } finally { if(excelConnection.State != ConnectionState.Closed ) excelConnection.Close(); } return ds; }

    Read the article

  • one-to-many with criteria question

    - by brnzn
    I want to apply restrictions on the list of items, so only items from a given date will be retrieved. Here are my mappings: <class name="MyClass" table="MyTable" mutable="false" > <cache usage="read-only"/> <id name="myId" column="myId" type="integer"/> <property name="myProp" type="string" column="prop"/> <list name="items" inverse="true" cascade="none"> <key column="myId"/> <list-index column="itemVersion"/> <one-to-many class="Item"/> </list> </class> <class name="Item" table="Items" mutable="false" > <cache usage="read-only"/> <id name="myId" column="myId" type="integer"/> <property name="itemVersion" type="string" column="version"/> <property name="startDate" type="date" column="startDate"/> </class> I tried this code: Criteria crit = session.createCriteria(MyClass.class); crit.add( Restrictions.eq("myId", new Integer(1))); crit = crit.createCriteria("items").add( Restrictions.le("startDate", new Date()) ); which results in the following queries: select ... from MyTable this_ inner join Items items1_ on this_.myId=items1_.myId where this_.myId=? and items1_.startDate<=? followed by select ... from Items items0_ where items0_.myId=? But what I need is something like: select ... from MyTable this_ where this_.myId=? followed by select ... from Items items0_ where items0_.myId=? and items0_.startDate<=? Any idea how I can apply a criteria on the list of items?
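    One way to get the second query shape, sketched below under the assumption of the Hibernate 3 Session API and a hypothetical getItems() accessor on MyClass: load the parent by id without joining the collection, then apply the date restriction through a collection filter (or a separate Criteria on Item), so it ends up in the second select only.

        // Sketch only: Hibernate 3.x; getItems() is a hypothetical accessor on MyClass.
        MyClass parent = (MyClass) session.createCriteria(MyClass.class)
                .add(Restrictions.eq("myId", Integer.valueOf(1)))
                .uniqueResult();

        // The filter runs against the mapped Items collection, so the date
        // restriction lands in the "select ... from Items" statement.
        List<?> items = session.createFilter(parent.getItems(),
                        "where this.startDate <= :cutoff")
                .setDate("cutoff", new Date())
                .list();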

    Read the article

  • C++ struct, public data members and inheritance

    - by Marius
    Is it ok to have public data members in a C++ class/struct in certain particular situations? How would that go along with inheritance? I've read opinions on the matter, some stated already here http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than http://stackoverflow.com/questions/670958/accessors-vs-public-members or in books/articles (Stroustrup, Meyers) but I'm still a little bit in the shade. I have some configuration blocks that I read from a file (integers, bools, floats) and I need to place them into a structure for later use. I don't want to expose these externally just use them inside another class (I actually do want to pass these config parameters to another class but don't want to expose them through a public API). The fact is that I have many such config parameters (15 or so) and writing getters and setters seems an unnecessary overhead. Also I have more than one configuration block and these are sharing some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle that situation? Does making a big struct to cover all parameters provide an acceptable compromise (I would have to leave some of these set to their default values for blocks that do not use them)?

    Read the article

  • Creating android app Database with big amount of data

    - by Thomas
    Hi all, The database of my application needs to be filled with a lot of data, so during onCreate() it's not only some create table SQL instructions, there are also a lot of inserts. The solution I chose is to store all these instructions in an SQL file located in res/raw, which is loaded with Resources.openRawResource(id). It works well but I face an encoding issue: I have some accented characters in the SQL file which appear wrong in my application. This is my code to do this: public String getFileContent(Resources resources, int rawId) throws IOException { InputStream is = resources.openRawResource(rawId); int size = is.available(); // Read the entire asset into a local byte buffer. byte[] buffer = new byte[size]; is.read(buffer); is.close(); // Convert the buffer into a string. return new String(buffer); } public void onCreate(SQLiteDatabase db) { try { // get file content String sqlCode = getFileContent(mCtx.getResources(), R.raw.db_create); // execute code for (String sqlStatements : sqlCode.split(";")) { db.execSQL(sqlStatements); } Log.v("Creating database done."); } catch (IOException e) { // Should never happen! Log.e("Error reading sql file " + e.getMessage(), e); throw new RuntimeException(e); } catch (SQLException e) { Log.e("Error executing sql code " + e.getMessage(), e); throw new RuntimeException(e); } The solution I found to avoid this is to load the SQL instructions from a huge static final string instead of a file, and then all accented characters appear correctly. But isn't there a more elegant way to load SQL instructions than a big static final String attribute with all the SQL instructions? Thanks in advance Thomas
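    A likely cause (not confirmed by the post) is that new String(buffer) decodes the bytes with the platform's default charset, which on the device may not match the encoding the res/raw file was saved in; is.available() is also not guaranteed to return the full length. A minimal sketch of the reader, assuming the SQL file is saved as UTF-8:

        // Sketch: read the raw resource fully and decode it explicitly as UTF-8
        // (assumes the file in res/raw was saved with UTF-8 encoding).
        public String getFileContent(Resources resources, int rawId) throws IOException {
            InputStream is = resources.openRawResource(rawId);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int len;
            while ((len = is.read(buf)) != -1) {
                out.write(buf, 0, len);
            }
            is.close();
            return new String(out.toByteArray(), "UTF-8");
        }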

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref: let rec read file includepath = let c = ref ' ' let k = ref NONE let sb = new StringBuilder() use stream = File.OpenText file let readc() = c := stream.Read() |> char // etc I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?

    Read the article

  • Complex Query on Cassandra

    - by Sadiqur Rahman
    I heard about the Cassandra database engine a few days ago and have been searching for good documentation on it. After studying Cassandra, I gather it is more scalable than other data engines. I also read about Amazon SimpleDB, but as SimpleDB has a 10GB/table limitation and Google Datastore is slower than Amazon SimpleDB, I prefer not to use them (Google Datastore, Amazon SimpleDB). So, to make our site scale, especially for high write rates with massive data, I would like to use Cassandra as our data engine. But before starting to use Cassandra I am confused about "How to handle complex data using Cassandra". I am giving you the MySQL database structure below; please read this and give me a good suggestion. Users Table hasColum ID Primary hasColum email Unique hasColum FirstName hasColum LastName Category Table hasColum ID Primary hasColum Parent hasColum Category Posts Table hasColum ID Primary hasColum UID Index foreign key linked to users-ID hasColum CID Index foreign key linked to Category-ID hasColum Title hasColum Post Index hasColum PunDate Comments hasColum ID primary hasColum UID Index foreign key linked to users-ID hasColum PID Index foreign key linked to Posts-ID hasColum Comment User Group hasColum ID primary hasColum Name UserToGroup Table (for many to many relation only) hasColum UID foreign key linked to Users-ID hasColum GID foreign key linked to Group-ID Finally, for your information, I would like to use the SimpleCassie PHP class http://code.google.com/p/simpletools-php/ So, it will be very helpful if you can give me an example using SimpleCassie

    Read the article

  • Fastest way to learn Flex and Java EE?

    - by LostWebNewbie
    Ok so me and my 2 friends have to make a webapp and well we think it's a good opportunity to learn JEE and Flex. The thing is we have very little knowledge about them and we have only 3 months to do it (it doesn't have to be super complicated). So my question is: what, in your opinion, would be the fastest way to learn them both? I guess we need to know some JSP, Servlets, JPA (??), Flex, maybe JavaScript+CSS? Anything else like EJB? Should we also learn Spring (or Struts)? Obviously reading books would be a good idea, but I bet we won't make the deadline if we try to read all the books... @Edit: I know the basics of JSP/Servlets (read Head First JSP&Servlets) but I have made only 1 project so far (a semi-decent hangman with JSP/Servlets and JPA for persistence), that's about it. Flex - I'm just starting, I only really know the basics of MXML and AS3. As for why: 1) because we need to do a project for the uni, and, well, I was thinking about becoming a web dev (yup - JEE+Flex) after graduation, so this is the perfect opportunity to learn them.

    Read the article

  • How to use libavformat for a separate encoder?

    - by Brendon Tsai
    I've built an encoder based on the QUALCOMM sample, which captures video and compresses it into an H.264 file. I am using Android 4.2.2. Now I want to add an MP4 muxer to this encoder (actually, just video will be fine, I don't need audio). I want to use FFmpeg. But after I read the example, I found out that the muxer was using FFmpeg's encoder. I don't know how to use the muxer part with another encoder. I've read this post, but I don't understand how the code provides the video stream to the muxer. I think that's mainly because I don't understand this code: AVCodecContext * strmCodec = oFmtCtx->streams[0]->codec; // Fill the required properties for codec context. // *from the documentation: // *The user sets codec information, the muxer writes it to the output. // *Mandatory fields as specified in AVCodecContext // *documentation must be set even if this AVCodecContext is // *not actually used for encoding. my_tune_codec(strmCodec); Can anyone give me a hint? Thank you!

    Read the article

  • The youtube API sometimes throws error: Call to a member function children() on a non-object

    - by Anna Lica
    When I launch the PHP script, it sometimes works fine, but many other times it gives me this error: Fatal error: Call to a member function children() on a non-object in /membri/americanhorizon/ytvideo/rilevametadatadaurlyoutube.php on line 21 This is the first part of the code // set feed URL $feedURL = 'http://gdata.youtube.com/feeds/api/videos/dZec2Lbr_r8'; // read feed into SimpleXML object $entry = simplexml_load_file($feedURL); $video = parseVideoEntry($entry); function parseVideoEntry($entry) { $obj= new stdClass; // get nodes in media: namespace for media information $media = $entry->children('http://search.yahoo.com/mrss/'); //<----this is the doomed line 21 UPDATE: solution adopted for ($i=0 ; $i< count($fileArray); $i++) { // set feed URL $feedURL = 'http://gdata.youtube.com/feeds/api/videos/'.$fileArray[$i]; // read feed into SimpleXML object $entry = simplexml_load_file($feedURL); if ( is_object($entry)) { $video = parseVideoEntry($entry); echo ($video->description."|".$video->length); echo "<br>"; } else { $i--; } } In this way I force the script to re-check the file that caused the error

    Read the article

  • Is there a way to specify a per-host deploy_to path with Capistrano?

    - by Chad Johnson
    I have searched and searched and asked a question already and have not received a clear answer. I have the following deploy script (snippet): set :application, "testapplication" set :repository, "ssh://domain.com//srv/hg/#{application}" set :scm, :mercurial set :deploy_to, "/srv/www/#{application}" role :web, "domain1.com", "domain2.com" role :app, "domain1.com", "domain2.com" role :db, "domain1.com", :primary => true, :norelease => true role :db, "domain2.com", :norelease => true As you see, I have set deploy_to to a specific path. And, I also have specified multiple web servers. However, each web server should have a different deployment path. I want to be able to run "cap deploy" and deploy to all hosts in one shot. I am NOT trying to deploy to staging and then to production. This is all production. My question is: how exactly do I specify a path per server? I have read the "Roles" documentation for Capistrano, and this is unclear. Can someone please post a deploy file example? I have read the documentation, and it is unclear how to do this. Does anyone know? Am I crazy? Am I thinking of this wrong or something? No answers anywhere online. Nowhere. Nothing. Please, someone help.

    Read the article

  • Compare system date with a date field in SQL

    - by JeT_BluE
    I am trying to compare a date record in SQL Server with the system date. In my example the user first register with his name and date of birth which are then stored in the database. The user than logs into the web application using his name only. After logging in, his name is shown on the side where it says "Welcome "player name" using Sessions. What I am trying to show in addition to his name is a message saying "happy birthday" if his date of birth matches the system date. I have tried working with System.DateTime.Now, but what I think is that it is also comparing the year, and what I really want is the day and the month only. I would really appreciate any suggestion or help. CODE In Login page: protected void Button1_Click(object sender, EventArgs e) { String name = TextBox1.Text; String date = System.DateTime.Today.ToShortDateString(); SqlConnection myconn2 = new SqlConnection(ConfigurationManager.ConnectionStrings["User"].ToString()); SqlCommand cmd2 = new SqlCommand(); SqlDataReader reader; myconn2.Open(); cmd2 = new SqlCommand("Select D_O_B from User WHERE Username = @username", myconn2); cmd2.Parameters.Add("@username", SqlDbType.NVarChar).Value = name; cmd2.Connection = myconn2 cmd2.ExecuteNonQuery(); reader = cmd2.ExecuteReader(); while (reader.Read().ToString() == date) { Session["Birthday"] = "Happy Birthday"; } } Note: I using the same reader in the code above this one, but the reader here is with a different connection. Also, reader.Read() is different than reader.HasRows? Code in Web app Page: string date = (string)(Session["Birthday"]); // Retrieving the session Label6.Text = date;

    Read the article

  • Agile methodologies: are they a by-product of mind control techniques like NLP / Scientology?

    - by Bobb
    The more I read about contemporary methods combining Scrum, TDD and XP, the more I feel like I have already seen these methods. I am not disputing that the agile approach is much more progressive than older rigid structures like waterfall; what I am saying is that it seems to me that agile methodologies are ideal to be used as a nest for a brainwashing business. I read a few articles which kept referring to authors whom I checked afterwards; they call themselves coaches and trainers (the usual thing when NLP specialists are involved) with no apparent software development history. Also, I met a guy who is a scrum facilitator (a term widely used in relation to Scientology) at a high profile company. I talked to him for less than 5 minutes but I got the complete feeling that he is either on drugs or he has been programmed by a powerful NLP specialist. The way he talked and his body movements suggested he is not an average, normal person (in terms of normal distribution :))... Please don't get me wrong, I am not a fan of conspiracy theories. But I had an experience with a member of the Church of Scientology who tried to infiltrate a commercial firm and actually got halfway to the very top in just 3 weeks. I saw his work. For now my overall impression is that psycho-manipulators are invading the IT industry through the convenient door of agile techniques. Does anyone have the same feeling/thoughts?

    Read the article

  • FDs not closed in FUSE filesystem

    - by cor
    Hi, I have a problem while implementing a fuse filesystem in python. for now i just have a proxy filesystem, exactly like a mount --bind would be. But, any file created, opened, or read on my filesystem is not released (the corresponding FD is not closed) Here is an example : yume% ./ProxyFs.py `pwd`/test yume% cd test yume% ls mdr yume% echo test test yume% ls mdr test yume% ps auxwww | grep python cor 22822 0.0 0.0 43596 4696 ? Ssl 12:57 0:00 python ./ProxyFs.py /home/cor/esl/proxyfs/test cor 22873 0.0 0.0 6352 812 pts/1 S+ 12:58 0:00 grep python yume% ls -l /proc/22822/fd total 0 lrwx------ 1 cor cor 64 2010-05-27 12:58 0 - /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 1 - /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 2 - /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 3 - /dev/fuse l-wx------ 1 cor cor 64 2010-05-27 12:58 4 - /home/cor/test/test yume% Does anyone have a solution to actually really close the fds of the file I use in my fs ? I'm pretty sure it's a mistake in the implementation of the open, read, write hooks but i'm stucked... Let me know if you need more details ! Thanks a lot Cor

    Read the article

  • How does Lucene indexing work?

    - by user312140
    Hello, I have read some documents about Lucene, including the talk at this link (http://lucene.sourceforge.net/talks/pisa). I don't really understand how Lucene indexes documents, or which algorithm Lucene uses for indexing. The link above says Lucene uses this incremental algorithm for indexing: maintain a stack of segment indices; create an index for each incoming document; push new indexes onto the stack; let b=10 be the merge factor and M=8; for (size = 1; size < M; size *= b) { if (there are b indexes with size docs on top of the stack) { pop them off the stack; merge them into a single index; push the merged index onto the stack; } else { break; } } How does this algorithm help us get optimized indexing? Does Lucene use a B-tree or some similar algorithm for indexing, or does it have a particular algorithm of its own? Thank you for reading my post.
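    The pseudocode above is Lucene's logarithmic segment-merging policy: each incoming document (or buffered batch) becomes a tiny segment, and whenever b segments of roughly the same size accumulate they are merged into one larger segment, so no B-tree is involved. In the public API the b above corresponds to the merge factor; a minimal sketch, assuming Lucene 3.x and a placeholder index path:

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.index.LogDocMergePolicy;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        // Sketch only: assumes Lucene 3.x; "/tmp/myindex" is a placeholder path.
        public class MergeFactorDemo {
            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(new File("/tmp/myindex"));
                IndexWriterConfig config = new IndexWriterConfig(
                        Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36));
                LogDocMergePolicy mergePolicy = new LogDocMergePolicy();
                mergePolicy.setMergeFactor(10); // the "b" in the pseudocode above
                config.setMergePolicy(mergePolicy);

                IndexWriter writer = new IndexWriter(dir, config);
                Document doc = new Document();
                doc.add(new Field("body", "hello lucene",
                        Field.Store.YES, Field.Index.ANALYZED));
                writer.addDocument(doc); // small segments get merged per the policy
                writer.close();
            }
        }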

    Read the article

  • Compressing a database to a single file?

    - by Assimilater
    Hi all. In my contact manager program I have been storing information by reading and writing comma delimited files for each individual contact, and storing notes in a file for each note, and I'm wondering how I could go about shrinking them all into one file effectively. I have attempted using data entry tools in the visual studio toolbox and template class, though I have never quite figured out how to use them. What would be especially convenient is if I could store data as data type IOwner (a class I created) as opposed to strings. I'd also need to figure out how to tell the program what to do when a file is opened (I've noticed in the properties how to associate a file type with the program though am not sure how to tell it what to do when it's opened). Edit: How about rephrasing the question: I have a class IContact with various properties some of them being lists of other class objects. I have a public list of IContact. Can I write Contacts as List(Of IContact) to a file as opposed to a bunch of strings? Second part of the question: I have associated .cms files with my program. But if a user opens the file, what code should the program run through in an attempt to deal with the file? This file is going to contain data that the program needs to read, how do I tell it to read a file when the program is opened vicariously because the file was opened? Does this make the question clearer?

    Read the article

  • Problem with reading and writing to a binary file in C++

    - by Reem
    I need to make a file that contains "name" which is a string -array of char- and "data" which is array of bytes -array of char in C++- but the first problem I faced is how to separate the "name" from the "data"? newline character could work in this case (assuming that I don't have "\n" in the name) but I could have special characters in the "data" part so there's no way to know when it ends so I'm putting an int value in the file before the data which has the size of the "data"! I tried to do this with code as follow: if((fp = fopen("file.bin","wb")) == NULL) { return false; } char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int number[1]; number[0]=10; fwrite( number ,1,1, fp ); char data[] = "1234567890"; fwrite( data , 1, number[0], fp ); fclose(fp); but I didn't know if the "int" part was right, so I tried many other codes including this one: char buffer[] = "first data\n"; fwrite( buffer ,1,sizeof(buffer),fp ); int size=10; fwrite( &size ,sizeof size,1, fp ); char data[] = "1234567890"; fwrite( data , 1, number[0], fp ); I see 4 "NULL" characters in the file when I open it instead of seeing an integer. Is that normal? The other problem I'm facing is reading that again from the file! The code I tried to read didn't work at all :( I tried it with "fread" but I'm not sure if I should use "fseek" with it or it just read the other character after it. Forgive me but I'm a beginner :(

    Read the article

  • Perl: How do I capture Chinese input via SCIM with STDIN?

    - by KCArpe
    Hi, I use SCIM on Linux for Chinese and Japanese language input. Unfortunately, when I try to capture input using Perl's STDIN, the input is crazy. As roman characters are typed, SCIM tries to guess the correct final characters. ^H (backspace) codes are used to delete previously suggested chars on the command line. (As you type, SCIM tries to guess final Asian chars and displays them.) However, these backspace chars are shown literally as ^H and not interpreted correctly. Example one-liner: perl -e 'print "Chinese: "; my $s = <STDIN>; print $s' When I enable SCIM Chinese or Japanese language input, as I type, e.g., nihao = ??, here is the result: ?^H?^H?^H?^H?^H??^H^H??^H^H??^H^H??^H^H??^H^H??^H^H??^H^H??^H^H??^H^H?? At the very end of this string, you can see "??" (nihao/hello). At a normal bash prompt, if I type nihao (with Chinese enabled), the results is perfect. This has something to do with interpretation of backspace chars (or control chars) during Perl's STDIN. The same thing happens when using command 'read' in Bash. Witness: read -p 'Chinese: ' s && echo $s Cheers, Kevin

    Read the article

  • Android - Parsing XML with XPath

    - by Ruben Deig Ramos
    First of all, thanks to all the people who are going to spend a little time on this question. Second, sorry for my English (not my first language! :D). Well, here is my problem. I'm learning Android and I'm making an app which uses an XML file to store some info. I have no problem creating the file, but when trying to read the XML tags with XPath (DOM, XMLPullParser, etc. only gave me problems) I've been able to read, at most, the first one. Let's see the code. Here is the XML file the app generates: <dispositivo> <id>111</id> <nombre>Name</nombre> <intervalo>300</intervalo> </dispositivo> And here is the function which reads the XML file: private void leerXML() { try { XPathFactory factory=XPathFactory.newInstance(); XPath xPath=factory.newXPath(); // Load the XML into memory File xmlDocument = new File("/data/data/com.example.gps/files/devloc_cfg.xml"); InputSource inputSource = new InputSource(new FileInputStream(xmlDocument)); // Define the expressions used to find the values. XPathExpression tag_id = xPath.compile("/dispositivo/id"); String valor_id = tag_id.evaluate(inputSource); id=valor_id; XPathExpression tag_nombre = xPath.compile("/dispositivo/nombre"); String valor_nombre = tag_nombre.evaluate(inputSource); nombre=valor_nombre; } catch (Exception e) { e.printStackTrace(); } } The app correctly gets the id value and shows it on the screen (the "id" and "nombre" variables are each assigned to a TextView), but "nombre" is not working. What should I change? :) Thanks for all your time and help. This site is quite helpful! PS: I've been searching for an answer on the whole site but didn't find any.
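    One plausible explanation (a guess, not something stated in the post): the InputSource wraps a FileInputStream, the first evaluate() consumes that stream, and the second expression is then evaluated against an exhausted source. A minimal sketch of a fix is to parse the file into a DOM Document once and run both XPath expressions against that object (same file path and fields as in the question):

        // Sketch: parse once, evaluate as many expressions as needed.
        private void leerXML() {
            try {
                org.w3c.dom.Document doc = javax.xml.parsers.DocumentBuilderFactory
                        .newInstance()
                        .newDocumentBuilder()
                        .parse(new File("/data/data/com.example.gps/files/devloc_cfg.xml"));

                XPath xPath = XPathFactory.newInstance().newXPath();
                id = xPath.evaluate("/dispositivo/id", doc);
                nombre = xPath.evaluate("/dispositivo/nombre", doc);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }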

    Read the article

  • ASP ListView - Eval() as formatted number, Bind() as unformatted?

    - by chucknelson
    I have an ASP ListView, and have a very simple requirement to display numbers as formatted w/ a comma (12,123), while they need to bind to the database without formatting (12123). I am using a standard setup - ListView with a datasource attached, using Bind(). I converted from some older code, so I'm not using ASP.NET controls, just form inputs...but I don't think it matters for this: <asp:SqlDataSource ID="MySqlDataSource" runat="server" ConnectionString='<%$ ConnectionStrings:ConnectionString1 %>' SelectCommand="SELECT NUMSTR FROM MY_TABLE WHERE ID = @ID" UpdateCommand= "UPDATE MY_TABLE SET NUMSTR = @NUMSTR WHERE ID = @ID"> </asp:SqlDataSource> <asp:ListView ID="MyListView" runat="server" DataSourceID="MySqlDataSource"> <LayoutTemplate> <div id="itemplaceholder" runat="server"></div> </LayoutTemplate> <ItemTemplate> <input type="text" name="NUMSTR" ID="NUMSTR" runat="server" value='<%#Bind("NUMSTR")%>' /> <asp:Button ID="UpdateButton" runat="server" Text="Update" Commandname="Update" /> </ItemTemplate> </asp:ListView> In the example above, NUMSTR is a number, but stored as a string in a SqlServer 2008 database. I'm also using the ItemTemplate as read and edit templates, to save on duplicate HTML. In the example, I only get the unformatted number. If I convert the field to an integer (via the SELECT) and use a format string like Bind("NUMSTR", "{0:###,###}"), it writes the formatted number to the database, and then fails when it tries to read it again (can't convert with the comma in there). Is there any elegant/simple solution to this? It's so easy to get the two-way binding going, and I would think there has to be a way to easily format things as well... Oh, and I'm trying to avoid the standard ItemTemplate and EditItemTemplate approach, just for sheer amount of markup required for that. Thanks!

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class which can contain many small elements in a list. Looks like: public class Farm { private ArrayList<Horse> mHorses; } just wondering what will happen if the mHorses array grew to something crazy like 15,000 elements. I'm assuming that trying to write and read this from the datastore would be crazy, because I'd get killed on the serialization process. It's important that I can get the entire array in one shot without paging, and each Horse element may only have two string properties in it, so they are pretty lightweight: public class Horse { private String mId; private String mName; } I don't need these horses indexed at all. Does it sound reasonable to just store the mHorse array as a raw Text field, and force my clients to do the deserialization? Something like: public class Farm { private Text mHorsesSerialized; } then whenever the client receives a Farm instance, it has to take the raw string of horses, and split it in order to reinstantiate the list, something like: // GWT client perhaps Farm farm = rpcCall.getMyFarm(); String horsesSerialized = farm.getHorses(); String[] horseBlocks = horsesSerialized.split(","); for (int i = 0; i < horseBlocks.length; i++) { // .. continue deserializing the individual objects ... } yeah... so hopefully it would be quick to read a Farm instance from the datastore, and the serialization penalty is paid by the client, Thanks
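    That approach can work; a rough sketch of the packing and unpacking, assuming hypothetical getId()/getName() accessors and a two-argument Horse constructor, and assuming '|' and newline never occur in the data (otherwise they would need escaping):

        // Server side: pack the list into one string for the Text property on Farm.
        StringBuilder sb = new StringBuilder();
        for (Horse h : mHorses) {
            sb.append(h.getId()).append('|').append(h.getName()).append('\n');
        }
        String packed = sb.toString();

        // Client side: rebuild the list from the packed string.
        List<Horse> horses = new ArrayList<Horse>();
        for (String line : packed.split("\n")) {
            if (line.length() == 0) {
                continue;
            }
            String[] parts = line.split("\\|");
            horses.add(new Horse(parts[0], parts[1])); // hypothetical constructor
        }

    A JSON library usable on both the server and the GWT client would avoid the hand-rolled escaping concerns, at the cost of an extra dependency.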

    Read the article

  • SQLite DB open time is really long

    - by sxingfeng
    I am using SQLite in C++ on Windows, and I have a DB about 60M in size. When I open the SQLite DB, it takes about 13 seconds. sqlite3* mpDB; nRet = sqlite3_open16(szFile, &mpDB); And if I close my application and reopen it, it takes only less than 1 second. First, I thought it was because of the disk cache, so I preload the 60M DB file before the SQLite open and read the file using CFile. However, after preloading, the first open is still very slow. BOOL CQFilePro::PreLoad(const CString& strPath) { boost::shared_array<BYTE> temp = boost::shared_array<BYTE>(new BYTE[PRE_LOAD_BUFFER_LENGTH]); int nReadLength; try { CFile file; if (file.Open(strPath, CFile::modeRead) == FALSE) { return FALSE; } do { nReadLength = file.Read(temp.get(), PRE_LOAD_BUFFER_LENGTH); } while (nReadLength == PRE_LOAD_BUFFER_LENGTH); file.Close(); } catch(...) { } return TRUE; } My question is: what is the difference between the first open and the second open, and how can I accelerate the SQLite open process?

    Read the article

  • Java Out of memory - Out of heap size

    - by user1907849
    I downloaded a sample program for file transfer between a client and a server. When I try running the program with a 1 GB file, I get: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at Client.main(Client.java:31). Edit: line 31 is: byte [] mybytearray = new byte [FILE_SIZE]; public final static int FILE_SIZE = 1097742336; // receive file long startTime = System.nanoTime(); byte [] mybytearray = new byte [FILE_SIZE]; InputStream is = sock.getInputStream(); fos = new FileOutputStream(FILE_TO_RECEIVED); bos = new BufferedOutputStream(fos); bytesRead = is.read(mybytearray,0,mybytearray.length); current = bytesRead; do { bytesRead = is.read(mybytearray, current, (mybytearray.length-current)); if(bytesRead >= 0) current += bytesRead; } while(bytesRead > -1); bos.write(mybytearray, 0 , current); bos.flush(); Is there any fix for this?
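    The OutOfMemoryError comes from allocating the whole 1 GB file as a single byte[] up front. Raising the heap with -Xmx would only mask the problem; streaming the socket to disk with a small, fixed-size buffer avoids it for any file size. A minimal sketch of the receive loop, keeping the variable names from the question (sock and FILE_TO_RECEIVED assumed set up as in the original):

        // Sketch: copy the stream with a small reusable buffer instead of
        // buffering the entire file in memory.
        InputStream is = sock.getInputStream();
        BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(FILE_TO_RECEIVED));
        byte[] buffer = new byte[8192]; // fixed size, independent of the file size
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            bos.write(buffer, 0, bytesRead);
        }
        bos.flush();
        bos.close();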

    Read the article

  • How to push further as a programmer?

    - by MaXX
    For the last, hmm, 6 months I've been reading into Programming in C, I got myself K&Rv2, BEEJ's socket guide, Expert C programming, Linux Systems Programming, the ISO/IEC 9899:1999 specification (real, and not draft). After receiving them from Amazon, I got Linux installed, and got to it. I'm done with K&R, about halfway through Expert C Programming, but still feel weak as a programmer, I'm sure it takes much more than 6 months of reading to become truly skilled, but my question is this: I've done all the exercises in K&Rv2 (in chapter 1) and some in other chapters, most of which are generally really boring. How do I lift my skills, and become truly great? I've invested money, time and a general lifestyle for something I truly desire, but I'm not sure how exactly to achieve it. Could someone explain to me, perhaps if I need to continuously code, what exactly I'm to code? I'm pretty sure, coding up hello world programs isn't going to teach me any more than I already know about anything. A friend of mine said "read" (with emphasis on read) a man page a day, but reading is all I do, I want to do, but I'm not sure what! I'm interested in security, but I'm not sure as a novice what to code that would be considered enough. Ah, I hope you don't delete this paste :) Thanks

    Read the article

  • BufferedImage & ColorModel in Java

    - by spol
    I am using an image processing library in Java to manipulate images. The first step I do is read an image and create a java.awt.image.BufferedImage object. I do it this way: BufferedImage sourceImage = ImageIO.read( new File( filePath ) ); The above code creates a BufferedImage object with a DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0. This is what happens when I run the above code on my MacBook. But when I run this same code on a Linux machine (hosted server), it creates a BufferedImage object with ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace@c39a20 transparency = 1 has alpha = false isAlphaPre = false. And I use the same JPG image in both cases. I don't know why the color model for the same image is different when run on Mac and Linux. The color model on the Mac has 4 components and the color model on Linux has 3 components. There is a problem arising because of this: the image processing library that I use always assumes that there are 4 components in the color model of the image passed, and it throws an array out of bounds exception when run on the Linux box. But on the MacBook, it runs fine. I am not sure if I am doing something wrong or there is a problem with the library. Please let me know your thoughts. Also, ask me any questions if I am not making sense!
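    One common workaround (a sketch, not necessarily how the library intends this to be handled): normalize every decoded image to a known 4-component type before handing it to the library, so the platform JPEG decoder's choice of color model no longer matters. Graphics2D and BufferedImage.TYPE_INT_ARGB come from the standard java.awt / java.awt.image packages:

        // Sketch: force a predictable 4-component ARGB color model
        // regardless of which decoder the platform uses.
        BufferedImage sourceImage = ImageIO.read(new File(filePath));
        BufferedImage normalized = new BufferedImage(
                sourceImage.getWidth(), sourceImage.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = normalized.createGraphics();
        g.drawImage(sourceImage, 0, 0, null);
        g.dispose();
        // pass 'normalized' (always 4 components) to the image processing library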

    Read the article
