Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 589/672 | < Previous Page | 585 586 587 588 589 590 591 592 593 594 595 596  | Next Page >

  • How to get the "real" colors when exporting a GDI drawing to bmp

    - by Rodrigo
    Hi guys, I'm developing an ASP.NET web handler that returns images by building an in-memory System.Windows.Forms.Control and then exporting the rendered control as a bitmap compressed to PNG, using the DrawToBitmap() method. The classes are fully working by now, except for a problem with color assignment. For example, take a gauge generated by the web handler: the colors assigned to the inner parts of the gauge are red (#FF0000), yellow (#FFFF00) and green (#00FF00), but I only get a dull version of each color (#CB100F for red, #CCB70D for yellow and #04D50D for green). The background is a bmp file containing a color gradient. The color loss happens whether the background is a gradient as in the sample, a black canvas, a white canvas, a transparent canvas, or even when no background is set at all. (I get the same dull colors with a black background, a transparent background, a white background, without a background set, and with the pixel format set to Format32bppArgb.) I've tried multiple bitmap color depths, output formats, compression levels, etc., but none of them seems to work. Any ideas? This is an excerpt of the source code:

        Bitmap bmp = new Bitmap(w, h, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
        Image bgimage = (Image) HttpContext.GetGlobalResourceObject("GraphicResources", "GaugeBackgroundImage");
        Canvas control_canvas = new Canvas();
        .... // the routine that makes the gauge
        ....
        control_canvas.DrawToBitmap(bmp, new Rectangle(0, 0, w, h));
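
    One way to narrow down where the shift is introduced (a rough diagnostic sketch, not the original handler; the probe coordinates and control type are assumptions) is to render the same control into a premultiplied and a non-premultiplied bitmap and compare a pixel that should be pure red:

        // Sketch: render the control into Format32bppPArgb and Format32bppArgb bitmaps
        // and compare a probe pixel, to see whether premultiplied alpha or the drawing
        // itself causes the color change. The probe point is an assumption.
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Windows.Forms;

        static class ColorProbe
        {
            public static void Compare(Control canvas, int w, int h, Point probe)
            {
                foreach (PixelFormat fmt in new[] { PixelFormat.Format32bppPArgb,
                                                    PixelFormat.Format32bppArgb })
                {
                    using (Bitmap bmp = new Bitmap(w, h, fmt))
                    {
                        canvas.DrawToBitmap(bmp, new Rectangle(0, 0, w, h));
                        Color c = bmp.GetPixel(probe.X, probe.Y);
                        System.Diagnostics.Debug.WriteLine(fmt + ": " + c);
                        bmp.Save("gauge_" + fmt + ".png", ImageFormat.Png);
                    }
                }
            }
        }

    If both formats report the shifted color, the change happens during drawing (for example anti-aliasing or blending in the gauge routine) rather than during the PNG export.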

    Read the article

  • Django upload failing on request data read error

    - by Jake
    Hi all, I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError: request data read error
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
          self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
          self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
          return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
          for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
          for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
          data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",) in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?

    Read the article

  • Map element position in data file to class property

    - by Augusto
    I need to read and write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of each piece of data saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long, with about 400 elements, but lots of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data, so instead of one big structure representing the data, I have several smaller classes. I've also created different classes for reading and writing the data. The problem is: how do I map the positions in the file to the object properties, so that I don't need to repeat the values in the reading and writing classes? I could use custom attributes and retrieve them via reflection, but since the code will be running on devices with little memory and a slow processor, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
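
    One attribute-free way to avoid repeating the offsets (a rough sketch; the type and member names are invented, not from the post) is to keep a single layout table that both the reader and the writer consult:

        // Sketch: a shared field-layout table so reader and writer use the same
        // positions and sizes without attributes or reflection. Names are hypothetical.
        using System;

        class FieldLayout
        {
            public readonly int Position;   // 1-based position from the spec
            public readonly int Size;       // field size in characters
            public FieldLayout(int position, int size) { Position = position; Size = size; }
        }

        static class RecordLayout
        {
            // The single place that mirrors the third-party spec.
            public static readonly FieldLayout SerialNumber = new FieldLayout(1, 10);
            public static readonly FieldLayout Hour         = new FieldLayout(11, 2);
            public static readonly FieldLayout Minute       = new FieldLayout(13, 2);
            // ... remaining fields
        }

        class DataFileReader
        {
            readonly string record;
            public DataFileReader(string record) { this.record = record; }

            // Both the reader and the writer would call helpers like these,
            // so the positions and sizes are never repeated.
            public string ReadField(FieldLayout f)
            {
                return record.Substring(f.Position - 1, f.Size);
            }

            public long ReadLong(FieldLayout f)
            {
                return long.Parse(ReadField(f));
            }
        }

    A writer class would take the same FieldLayout values and pad or format into the corresponding slice of the output record.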

    Read the article

  • Problem with .Contains

    - by Rene
    Hello there, I have a little problem which I can't solve. I want to use an SQL IN statement in LINQ. I've read in this forum and in others that I have to use .Contains (with the reverse-thinking notation :-)). As input I have a list of Guids. I first copied them into an array and then did something like this:

        datatoget = (from p in objectContext.MyDataSet
                     where ArrayToSearch.Contains(p.Subtable.Id.ToString())
                     select p).ToList();

    datatoget is the result in which all records matching Subtable.Id (which is a Guid) should be stored. Subtable is a detail table of MyData, and its Id is a Guid. I've tried several things (converting the Guid to a string and then using .Contains, etc.), but I always get an exception which says: 'LINQ to Entities' does not recognize the method 'Boolean Contains(System.Guid)' and cannot translate this method into a store expression. (Something like that, because I'm using the German version of VS 2008.) I am using LINQ to Entities with .NET 3.5 and am programming in C# with VS 2008. I've read several examples, but it doesn't work. Is it perhaps because I'm using Guids instead of strings? I've also tried to write my own compare function, but I don't know how to integrate it so that .NET calls my function for comparing.
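
    For what it's worth, LINQ to Entities in .NET 3.5 cannot translate Contains at all (support only arrived with EF4), so one common workaround is to build the IN clause yourself as a chain of equality comparisons in an expression tree. A minimal sketch, with illustrative names only:

        // Sketch: OR together one equality test per value, since EF1 cannot
        // translate Contains. Whether the resulting tree translates for a given
        // navigation property should be verified; names here are invented.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        static class QueryHelpers
        {
            public static Expression<Func<T, bool>> BuildOrEquals<T, TValue>(
                Expression<Func<T, TValue>> selector, IEnumerable<TValue> values)
            {
                ParameterExpression p = selector.Parameters.Single();

                // One "selector == value" test per element.
                IEnumerable<Expression> equalities = values.Select(v =>
                    (Expression)Expression.Equal(selector.Body, Expression.Constant(v, typeof(TValue))));

                // Chain them with OrElse; an empty list matches nothing.
                Expression body = equalities.Aggregate(
                    (Expression)Expression.Constant(false),
                    (acc, eq) => Expression.OrElse(acc, eq));

                return Expression.Lambda<Func<T, bool>>(body, p);
            }
        }

        // Usage (guids is the original List<Guid>):
        // var predicate = QueryHelpers.BuildOrEquals<MyData, Guid>(p => p.Subtable.Id, guids);
        // var datatoget = objectContext.MyDataSet.Where(predicate).ToList();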

    Read the article

  • Tips on managing dependencies for a release?

    - by Andrew Murray
    Our system comprises many .NET websites, class libraries, and an MSSQL database. We use SVN for source control and TeamCity to automatically build to a Test server. Our team is normally working on 4 or 5 projects at a time. We try to lump many changes into a largish rollout every 2-4 weeks. My problem is keeping track of all the dependencies for a rollout. Example: Website A cannot go live until we've rolled out Branch X of Class Library B, built in turn against the trunk of Class Library C, which needs Config Updates Y and Z and Database Update D, which needs Migration Script E... It gets even more complex - like making sure each developer's project is actually compatible with the others and builds against the same versions. Yes, this is a management issue as much as a technical issue. Currently our non-optimal solution is:
    - a whiteboard listing features that haven't gone live yet
    - relying on our memory and intuition when planning the rollout, until we're pretty sure we've thought of everything
    - a dry run on our Staging environment; it's a good indication, but we're often not sure whether Staging is 100% in sync with Live, which is part of the problem I'm hoping to solve
    - some amount of winging it on rollout day
    So far so good, minus a few close calls. But as our system grows, I'd like a more scientific release management system allowing for more flexibility, like being able to roll out a single change or bugfix on its own, safe in the knowledge that it won't break anything else. I'm guessing the best solution involves some sort of version numbering system, and perhaps a project management tool. We're a start-up, so we're not too hot on religiously sticking to rigid processes, but we're happy to start, provided it doesn't add more overhead than it's worth. I'd love to hear advice from other teams who have solved this problem.

    Read the article

  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application). Over time (after a few application recycles) we see an imbalance between CPUs - some have 8 ODBC processes running, some only 5. That leads to a CPU time imbalance too. Up to this point we had assumed that a CPU is assigned to an ODBC process in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That's not the case, though. Is there any information on how the ODBC pool decides which CPU to choose for each newly allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on an ASP.NET WebForm. My problem is that I need to filter the data returned - some of the data being returned should not be displayed to the user. I'm aware of DataTable.Select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable so I can databind it to the GUI control. But more importantly, the filtering I need isn't something that can easily be put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is create a new DataTable, read everything out of the original, add to the new table whatever is appropriate, and then databind the new table to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable isn't likely to be large enough that copying them all in memory will cause much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?
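
    DataTable itself has no predicate-based filter, but the copy can at least be wrapped up once, and on .NET 3.5 the DataSetExtensions assembly already gives a callback-style filter via LINQ. A rough sketch (column and variable names are illustrative, not from the post):

        // Sketch: filter a DataTable through an arbitrary predicate. With
        // System.Data.DataSetExtensions, AsEnumerable/Where/CopyToDataTable does it
        // directly; otherwise Clone + ImportRow achieves the same thing.
        using System;
        using System.Data;
        using System.Linq;

        static class DataTableFilter
        {
            // LINQ flavour (note: CopyToDataTable throws if no rows pass the filter).
            public static DataTable WhereLinq(DataTable source, Func<DataRow, bool> keep)
            {
                return source.AsEnumerable().Where(keep).CopyToDataTable();
            }

            // Plain ADO.NET flavour, works on .NET 2.0 as well.
            public static DataTable Where(DataTable source, Predicate<DataRow> keep)
            {
                DataTable result = source.Clone();   // same schema, no rows
                foreach (DataRow row in source.Rows)
                    if (keep(row))
                        result.ImportRow(row);
                return result;
            }
        }

        // Usage: drop rows whose "Code" value is in the exclusion array.
        // DataTable visible = DataTableFilter.Where(original,
        //     r => Array.IndexOf(excludedCodes, (string)r["Code"]) < 0);
        // grid.DataSource = visible;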

    Read the article

  • Files not waiting for each other

    - by Sunny
    I have two batch files, where file2.bat depends on file1.bat's output.

    file1.bat:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (FOR /f "delims=" %%a IN (Source.txt) DO (
            ECHO %%a|FIND "Appprocess.exe" >NUL
            IF NOT ERRORLEVEL 1 SET keystring1=%%a
            FOR %%b IN (App1 App2 App3 App4 App5 App6) DO (
                ECHO %%a|FIND "%%b" >NUL
                IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
            )))>result.txt
        GOTO :EOF

    file2.bat:

        @echo off
        setlocal enabledelayedexpansion
        (for /f "tokens=1,2" %%a in (memory.txt) do (
            for /f "tokens=5" %%c in ('find " %%a " ^< result.txt ') do echo %%c %%b
        ))> new.txt

    file1.bat usually takes 60 seconds to complete. In master.bat I am calling the two files as:

        call file1.bat
        call file2.bat

    but file2.bat is not waiting for file1.bat to complete its execution. I even tried to call file2.bat from within file1.bat as below, but it still does not wait for file1.bat to complete:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (FOR /f "delims=" %%a IN (Source.txt) DO (
            ECHO %%a|FIND "HsvDataSource.exe" >NUL
            IF NOT ERRORLEVEL 1 SET keystring1=%%a
            FOR %%b IN (EUHFMPROD USHFMPROD TL2TEST GSHFMPROD TL2PROD GSARCH1213 TL2FY13) DO (
                ECHO %%a|FIND "%%b" >NUL
                IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
            )))>file2.txt
        GOTO :EOF
        call file1.bat

    I also tried the start option below, but it had no effect:

        start file1.bat /wait
        call file2.bat

    I can't work out why this is happening. Any ideas?

    Read the article

  • should std::auto_ptr<>::operator = reset / deallocate its existing pointee ?

    - by afriza
    I read here about std::auto_ptr<>::operator=: "Notice however that the left-hand side object is not automatically deallocated when it already points to some object. You can explicitly do this by calling member function reset before assigning it a new value." However, when I read the source code of the header file C:\Program Files\Microsoft Visual Studio 8\VC\ce\include\memory, I see this:

        template<class _Other>
        auto_ptr<_Ty>& operator=(auto_ptr<_Other>& _Right) _THROW0()
        {   // assign compatible _Right (assume pointer)
            reset(_Right.release());
            return (*this);
        }

        auto_ptr<_Ty>& operator=(auto_ptr<_Ty>& _Right) _THROW0()
        {   // assign compatible _Right (assume pointer)
            reset(_Right.release());
            return (*this);
        }

        auto_ptr<_Ty>& operator=(auto_ptr_ref<_Ty> _Right) _THROW0()
        {   // assign compatible _Right._Ref (assume pointer)
            _Ty **_Pptr = (_Ty **)_Right._Ref;
            _Ty *_Ptr = *_Pptr;
            *_Pptr = 0;     // release old
            reset(_Ptr);    // set new
            return (*this);
        }

    What is the correct/standard behavior? How do other STL implementations behave?

    Read the article

  • C++ -- Is there an implicit cast here from Fred* to auto_ptr<Fred>?

    - by q0987
    Hello all, I saw the following code:

        #include <new>
        #include <memory>
        using namespace std;

        class Fred; // Forward declaration
        typedef auto_ptr<Fred> FredPtr;

        class Fred {
        public:
            static FredPtr create(int i)
            {
                return new Fred(i); // Is there an implicit cast here? If not, how can we return
                                    // a Fred* when the return type is FredPtr?
            }
        private:
            Fred(int i=10) : i_(i) { }
            Fred(const Fred& x) : i_(x.i_) { }
            int i_;
        };

    Please see the question in the create function. Thank you.

    // Updated based on comments
    Yes, the code does not compile under VC 8.0:

        error C2664: 'std::auto_ptr<_Ty>::auto_ptr(std::auto_ptr<_Ty> &) throw()' : cannot convert parameter 1 from 'Fred *' to 'std::auto_ptr<_Ty> &'

    The code was copied from the C++ FAQ 12.15. However, after replacing

        return new Fred(i);

    with

        return auto_ptr<Fred>(new Fred(i));

    the code passes the VC 8.0 compiler. But I am not sure whether or not this is a correct fix.

    Read the article

  • How to implement Session timeout in Web Server Side?

    - by Morgan Cheng
    I came across a web framework that implements in-memory sessions in this way: the session object is added to a Cache with a timeout, and when the time is up the session is removed from the Cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the Cache to refresh the timeout. Everything looks fine, until this scenario is discovered. Say one operation takes a long time, longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally the long-running request finishes and releases the lock. But since it took longer than the timeout, the session object has already been removed from the Cache - the one request holding the lock never had a chance to "touch" the session object. The second request then gets the lock but cannot retrieve the expired session object. Oops. To fix this, the second request has to re-create the session object, but that is like digging a buried body out of its grave and trying to bring it back to life: it leads to buggy code. I'm wondering what the best way is to implement session timeouts that handle such a scenario. I know that current platforms must have good session mechanisms; I just want to know how it works under the hood.
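
    One way to avoid resurrecting expired sessions (a rough sketch of the idea only, not how any particular platform does it; all names are made up and the code ignores clustering) is to let the eviction sweep skip sessions whose lock is currently held, and to refresh the timestamp when the lock is released rather than when it is acquired:

        // Sketch: an in-memory session store whose eviction pass skips sessions that
        // are currently in use, so a slow request cannot have its session expire
        // out from under a waiting request.
        using System;
        using System.Collections.Generic;

        class SessionEntry
        {
            public object Lock = new object();
            public DateTime LastTouched = DateTime.UtcNow;
            public Dictionary<string, object> Data = new Dictionary<string, object>();
            public int InUse;                        // requests currently inside this session
        }

        class SessionStore
        {
            readonly Dictionary<string, SessionEntry> sessions = new Dictionary<string, SessionEntry>();
            readonly TimeSpan timeout = TimeSpan.FromMinutes(20);

            public void WithSession(string id, Action<Dictionary<string, object>> work)
            {
                SessionEntry entry;
                lock (sessions)
                {
                    if (!sessions.TryGetValue(id, out entry))
                        sessions[id] = entry = new SessionEntry();
                    entry.InUse++;                   // visible to the eviction sweep
                }
                try
                {
                    lock (entry.Lock)                // one request at a time per session
                        work(entry.Data);
                }
                finally
                {
                    lock (sessions)
                    {
                        entry.LastTouched = DateTime.UtcNow;   // touch on release, not on acquire
                        entry.InUse--;
                    }
                }
            }

            public void Evict()                      // called periodically by a timer
            {
                lock (sessions)
                {
                    var expired = new List<string>();
                    foreach (var kv in sessions)
                        if (kv.Value.InUse == 0 && DateTime.UtcNow - kv.Value.LastTouched > timeout)
                            expired.Add(kv.Key);
                    foreach (string key in expired)
                        sessions.Remove(key);
                }
            }
        }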

    Read the article

  • ASP.NET 2.0 and COM Port Communication

    - by theaviator
    Hello guys, I have a managed DLL which communicates with devices attached to COM/serial ports. A desktop WinForms application sends requests to the ports and receives/stores data in memory; in the WinForms app I have added a reference to the DLL and I am using its methods. This works well. Now there is a situation where I need to show this data from the serial/COM port on a web page, and users should also be able to send requests to the ports using this DLL. I have made a web app in ASP.NET 2.0 and added a reference to the DLL. I am able to use the DLL: it communicates on the COM port upon a button click on the web page, and the response is shown on the web page. However, I am not happy with this approach and strongly feel that it is a bad one. Also, the development server crashes after 3-4 requests. What is the best approach in this scenario? If I use a Windows service, how would my ASP.NET app communicate with the Windows service? Or can this be done easily using WCF? I have not used WCF before, nor any .NET Remoting technique. Please suggest the best architecture for this scenario. Thank you.
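
    If the serial access is moved into a Windows service, the ASP.NET app can talk to it over a small WCF contract (WCF itself needs .NET 3.0 or later on both sides). A rough sketch of what that boundary could look like; the contract, names and binding choice are assumptions, not from the post:

        // Sketch: the COM-port work lives in a Windows service that self-hosts this
        // WCF contract; the ASP.NET app only ever calls the service. Names invented.
        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IDeviceGateway
        {
            [OperationContract]
            string SendRequest(string portName, string request);   // returns the device reply

            [OperationContract]
            string GetLatestReading(string portName);              // data cached by the service
        }

        // Hosting side, inside the Windows service's OnStart:
        // var host = new ServiceHost(typeof(DeviceGateway));
        // host.AddServiceEndpoint(typeof(IDeviceGateway),
        //     new NetTcpBinding(), "net.tcp://localhost:8523/devices");
        // host.Open();

        // Client side, inside the ASP.NET page:
        // var factory = new ChannelFactory<IDeviceGateway>(
        //     new NetTcpBinding(), "net.tcp://localhost:8523/devices");
        // IDeviceGateway gateway = factory.CreateChannel();
        // label.Text = gateway.GetLatestReading("COM1");

    This keeps the single long-lived owner of the serial port out of the ASP.NET worker process, which is what usually causes the instability when pages touch the port directly.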

    Read the article

  • NHibernate.MappingException - Troubleshooting Checklist (no persister for)

    - by Berryl
    Here's a starter list:
    1) If the hbm is hand generated, is it an embedded resource?
    2) If using FNH, does it pass a PersistenceSpecification test?
    3) If not using FNH, can you save and then load the persisted class?
    4) More?
    I'm sure many of you have gotten this one at one point or another. But have you ever gotten it when you knew your mapping was set up correctly? I started getting this exception after I switched to a new repository design, but only in one scenario! PersistenceSpecification tests pass, as do all repository methods (using SQLite). The scenario that leads to the exception is when legacy projects from a different db are converted to the green-field system. The legacy system is from a different database and has its own session factory, which should be irrelevant because the error comes after the previously unconverted Projects have been retrieved and are in memory. As the routine tries to save these unconverted Projects into the new database, the exception is thrown; the full stack trace is below. Any ideas on how to build up the troubleshooting checklist and solve this problem? Cheers, Berryl

    === the exception trace ===

        failed: NHibernate.MappingException : No persister for: Smack.ConstructionAdmin.Domain.Model.Projects.Project
        at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName)
        at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj)
        at NHibernate.Engine.ForeignKeys.IsTransient(String entityName, Object entity, Nullable`1 assumed, ISessionImplementor session)
        at NHibernate.Event.Default.AbstractSaveEventListener.GetEntityState(Object entity, String entityName, EntityEntry entry, ISessionImplementor source)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.FireSaveOrUpdate(SaveOrUpdateEvent event)
        at NHibernate.Impl.SessionImpl.SaveOrUpdate(Object obj)
        NHibernate\Repository\FabioNHibRepository.cs(46,0): at Smack.Core.Data.NHibernate.Repository.FabioNHibRepository`1.Add(T item)
        LegacyConversion\LegacyBatchUpdater.cs(20,0): at Smack.ConstructionAdmin.Data.LegacyConversion.LegacyBatchUpdater.ConvertOpenLegacyProjects(ILegacyProjectDao legacyProjectDao, IProjectRepository greenProjectRepository)
        Data\Brownfield\ProjectBatchUpdate_SQLiteTests.cs(19,0): at Smack.ConstructionAdmin.Tests.Data.Brownfield.ProjectBatchUpdate_SQLiteTests.Test()
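
    One more candidate for the checklist: when two session factories are in play, "no persister" often just means the entity is mapped in one factory's Configuration but not the other's. A quick hedged check using the NHibernate Configuration API (which assembly holds the mappings is an assumption here):

        // Sketch: verify at startup that the green-field Configuration actually
        // contains a class mapping for Project before the conversion routine runs.
        using System.Reflection;
        using NHibernate.Cfg;

        static class MappingCheck
        {
            public static void Verify()
            {
                Configuration cfg = new Configuration().Configure();   // green-field config
                cfg.AddAssembly(Assembly.GetExecutingAssembly());      // or wherever the mappings live

                var mapping = cfg.GetClassMapping(typeof(Smack.ConstructionAdmin.Domain.Model.Projects.Project));
                if (mapping == null)
                    throw new System.InvalidOperationException(
                        "Project is not mapped in the green-field session factory's configuration.");
            }
        }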

    Read the article

  • Compiling C lib and OCaml exe using it, all using ocamlfind

    - by Magnus
    I'm trying to work out how to use ocamlfind to compile a C library and an OCaml executable that uses that C library. I put together a set of rather silly example files.

        % cat sillystubs.c
        #include <stdio.h>
        #include <caml/mlvalues.h>
        #include <caml/memory.h>
        #include <caml/alloc.h>
        #include <caml/custom.h>

        value caml_silly_silly( value unit )
        {
            CAMLparam1( unit );
            printf( "%s\n", __FILE__ );
            CAMLreturn( Val_unit );
        }

        % cat silly.mli
        external silly : unit -> unit = "silly_silly"

        % cat foo.ml
        open Silly
        open String
        let _ =
            print_string "About to call into silly";
            silly ();
            print_string "Called into silly"

    I believe the following is the way to compile up the library:

        % ocamlfind ocamlc -c sillystubs.c
        % ar rc libsillystubs.a sillystubs.o
        % ocamlfind ocamlc -c silly.mli
        % ocamlfind ocamlc -a -o silly.cma -ccopt -L${PWD} -cclib -lsillystubs

    However, I don't seem to be able to use the created library:

        % ocamlfind ocamlc -custom -o foo foo.cmo silly.cma
        /usr/bin/ld: cannot find -lsillystubs
        collect2: ld returned 1 exit status
        File "_none_", line 1, characters 0-1:
        Error: Error while building custom runtime system

    The OCaml tools are somewhat mysterious to me, so any pointers would be most welcome.

    Read the article

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, we're using EF4 in a fairly large system and occasionally run into problems because EF4 is unable to convert certain expressions into SQL. At present we either do some fancy footwork (in the DB or the code) or just accept the performance hit and allow the query to be executed in memory. Needless to say, neither of these is ideal, and the hacks we've sometimes had to use reduce readability and maintainability. What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously some things, like .NET method calls, will always have to run client-side, but some functionality, like date comparisons (e.g. "Group by weeks in Linq to Entities"), should be doable. I've Googled, but perhaps I'm using the wrong terminology, as all I get is information about the new features of EF4 SQL generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS 2010 Ultimate, .NET 4 (not the client profile) and EF4. The app is ASP.NET and runs in a 64-bit environment, in case it makes a difference.
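
    Before going as far as a custom provider, it may be worth noting that EF4 already exposes provider-specific functions that do translate to SQL; for the date-comparison case, something along these lines works with the SQL Server provider (a sketch only; the entity and property names are invented):

        // Sketch: EF4's SqlFunctions (and EntityFunctions) map to store functions,
        // so grouping by week can stay in SQL instead of running in memory.
        // "MyEntities", "Orders" and "OrderDate" are illustrative names.
        using System;
        using System.Linq;
        using System.Data.Objects.SqlClient;   // SqlFunctions (SQL Server provider only)

        class WeeklyTotals
        {
            static void Query(MyEntities context)   // MyEntities = your ObjectContext
            {
                var perWeek =
                    from o in context.Orders
                    group o by new
                    {
                        Year = o.OrderDate.Year,
                        Week = SqlFunctions.DatePart("week", o.OrderDate)   // translated to DATEPART
                    }
                    into g
                    select new { g.Key.Year, g.Key.Week, Count = g.Count() };

                foreach (var row in perWeek)
                    Console.WriteLine("{0} wk {1}: {2}", row.Year, row.Week, row.Count);
            }
        }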

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We use Apache with JBoss to host our application, but we have found some issues related to the thread handling of mod_jk. Ours is a low-traffic website with a maximum of 200-300 concurrent users during peak activity. As the traffic grows (not in terms of concurrent users, but in terms of cumulative requests reaching the server), the server stops serving requests for a long time; it doesn't crash, but it cannot serve requests for up to 20 minutes. The JBoss server console showed that 350 threads were busy on both servers, although there was enough free memory, say more than 1-1.5 GB (2 JBoss servers were used, 64-bit, with 4 GB RAM allocated to JBoss). To investigate the problem we used the JBoss and Apache web consoles, and we saw threads sitting in the S state for minutes, even though our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads did not belong to our application classes but to the AJP port 8009. Could somebody help me with this? Somebody else might have hit this issue and solved it somehow. Let me know if more information is required. Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy that could be fatal for me if I switch to it? The versions used are as follows: Apache 2.0.52, JBoss 4.2.2, mod_jk 1.2.20, JDK 1.6, operating system RHEL 4. Thanks for the help.

    Read the article

  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4GB RAM. The OS exposes roughly 3.3GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several Ramdisk drivers that can create a virtual disk from the OS-hidden RAM and am looking for suggestions on how best to use the 740MB virtual disk to speed up development in our environment. The slowest parts of development for us are compiling and launching SpringSource dm Server. One option is to configure Vista to swap to the Ramdisk. That works, and noticeably speeds up development in low-memory situations. However, the 3.3GB available to the OS is often sufficient, and there are many situations where we do not use the swap file(s) much. Another option is to use the Ramdisk as a location for temporary files. Using the Vista mklink command, I created a link from where the SpringSource dm Server's work area normally resides to the Ramdisk. That significantly improves server startup times but does nothing for compile times. There is roughly 500MB still free on the Ramdisk when the work directory is fully utilized, so there is room for plenty more. What other files or directories might be candidates to place on the Ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open-source tool for Vista that will show me which files are used most frequently during a period of time, to reduce the guesswork?

    Read the article

  • Behavior of retained property while holder is retained

    - by Aurélien Vallée
    Hello everyone, I am a beginner Objective-C programmer coming from the C++ world, and I find it very difficult to understand the memory management offered by NSObject. Say I have the following class:

        @interface User : NSObject
        {
            NSString* name;
        }
        @property (nonatomic,retain) NSString* name;
        - (id) initWithName: (NSString*) theName;
        - (void) release;
        @end

        @implementation User
        @synthesize name;
        - (id) initWithName: (NSString*) theName
        {
            if ( self = [super init] ) {
                [self setName:theName];
            }
            return self;
        }
        - (void) release
        {
            [name release];
            [super release];
        }
        @end

    Now, considering the following code, I can't understand the retain count results:

        NSString* name = [[NSString alloc] initWithCString:/* C string from sqlite3 */]; // (1) name retainCount = 1
        User* user = [[User alloc] initWithName:name];                                   // (2) name retainCount = 2
        [whateverMutableArray addObject:user];                                           // (3) name retainCount = 2
        [user release];                                                                  // (4) name retainCount = 1
        [name release];                                                                  // (5) name retainCount = 0

    At (4), the retain count of name decreased from 2 to 1. But that can't be right: there is still the user instance inside the array that points to name! The retain count of a variable should only decrease when the retain count of a referring object reaches 0, that is, when it is dealloced, not merely released.

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions
        {
            # data source
            type      = mysql
            sql_host  = xxxxxx
            sql_user  = xxxxxx  #replace with your db username
            sql_pass  = xxxxxx  #replace with your db password
            sql_db    = xxxxxx  #replace with your db name
            # these two are optional
            sql_port  = xxxxxx
            #sql_sock = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre = SET NAMES utf8

            # main document fetch query
            sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                FROM question AS q \
                WHERE q.deleted=0 \

            # optional - used by command-line search utility to display document information
            sql_query_info = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint = level
        }

        index questions
        {
            # which document source to index
            source = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo = extern

            # morphology
            morphology = stem_en

            # stopwords file
            #stopwords = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len = 1
            enable_star = 1

            # charset encoding type
            charset_type = utf-8
        }

        # indexer settings
        indexer
        {
            # memory limit (default is 32M)
            mem_limit = 64M
        }

        # searchd settings
        searchd
        {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address = 127.0.0.1

            # port on which search daemon will listen
            port = 3312

            # searchd run info is logged here - create or change the folder
            log = ../log/sphinx.log

            # all the search queries are logged here
            query_log = ../log/query.log

            # client read timeout, seconds
            read_timeout = 5

            # maximum amount of children to fork
            max_children = 30

            # a file which will contain searchd process ID
            pid_file = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches = 1000
        }

    And here's the search part of my views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Please criticize this method

    - by Jakob
    Hi, I've been looking around the net for tab-button close functionality, but all the solutions I found had somewhat complicated event handlers, and I wanted to try to keep it simple. I might have broken good coding practice in doing so, though, so please review this method and tell me what is wrong.

        public void AddCloseItem(string header, object content)
        {
            //Create tabitem with header and content
            StackPanel headerPanel = new StackPanel() { Orientation = Orientation.Horizontal, Height = 14 };
            headerPanel.Children.Add(new TextBlock() { Text = header });
            Button closeBtn = new Button()
            {
                Content = new Image() { Source = new BitmapImage(new Uri("images/cross.png", UriKind.Relative)) },
                Margin = new Thickness() { Left = 10 }
            };
            headerPanel.Children.Add(closeBtn);
            TabItem newTabItem = new TabItem() { Header = headerPanel, Content = content };

            //Add close button functionality
            closeBtn.Tag = newTabItem;
            closeBtn.Click += new RoutedEventHandler(closeBtn_Click);

            //Add item to list
            this.Add(newTabItem);
        }

        void closeBtn_Click(object sender, RoutedEventArgs e)
        {
            this.Remove((TabItem)((Button)sender).Tag);
        }

    So what I'm doing is storing the tab item in the button's Tag property, and when the button is clicked I just remove the tab item from my ObservableCollection, and the UI is updated appropriately. Am I using too much memory by saving the tab item to the Tag property?

    Read the article

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the windows ddk 7 package. You can find out more information on the offreg.dll here: MSDN Currently, while attempting to read a value from an open registry hive / key I receive the following error: 234 or ERROR_MORE_DATA Here is the .h code that contains ORGetValue: DWORD ORAPI ORGetValue ( __in ORHKEY Handle, __in_opt PCWSTR lpSubKey, __in_opt PCWSTR lpValue, __out_opt PDWORD pdwType, __out_bcount_opt(*pcbData) PVOID pvData, __inout_opt PDWORD pcbData ); Here is the code that I am using to pull the data [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)] public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData); IntPtr myHive; IntPtr myKey; string myValue; uint pdwtype; uint pcbdata; uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata); The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer.. Or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.

    Read the article

  • Cannot read values from [NSUserDefaults standardUserDefaults] after synchronize

    - by Nonlinearsound
    In the application delegate's didFinishLaunching method, I am using the following code to build up a new NSDictionary to be used as the default settings for the user:

        NSNumber *testValue = (NSNumber*)[[NSUserDefaults standardUserDefaults] objectForKey:@"settingsversion"];
        if (testValue == nil) {
            NSNumber *numNewDB = [NSNumber numberWithBool:NO];
            NSNumber *numFirstUse = [NSNumber numberWithBool:YES];
            NSDate *dateLastStatic = [NSDate date];
            NSDate *dateLastMobile = [NSDate date];
            NSNumber *numSettingsversion = [NSNumber numberWithFloat:1.0];
            NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
                numNewDB, @"newdb",
                numFirstUse, @"firstuse",
                numSettingsversion, @"settingsversion",
                dateLastStatic, @"laststaticupdate",
                dateLastMobile, @"lastmobileupdate",
                nil];
            [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }

    Later, in another view controller, I am trying to read back a value from those same user defaults - or at least I thought I would - but I don't get a valid object pointer for the desired lastUpdate member there.

    In the .h file:

        NSDate *lastUpdate;

    In the .m file, in a member function:

        lastUpdate = (NSDate *)[[NSUserDefaults standardUserDefaults] objectForKey:@"laststaticupdate"];

    Even if I print out the content of [NSUserDefaults standardUserDefaults], I only get this:

        2010-04-29 15:13:22.322 myApp[4136:207] Content of UserDefaults: <NSUserDefaults: 0x11d340>

    This leads me to the conclusion that there is no standardUserDefaults dictionary somewhere in memory, or that it cannot be read as such a structure. Edit: Every time I restart the app on the device, the check for testValue is nil and I am building up the dictionary again, but after one run it should be persistent on the phone, right? Am I doing something wrong somewhere in between? I have the feeling that I didn't really understand how to load and save settings persistently for an application on the iPhone. Is there anything I have to do in addition to this, such as integrating a Settings.bundle in Xcode or saving the dictionary manually to the Documents folder? Can someone help me out here? Thanks a lot!

    Read the article

  • Could not determine the size of this expression.

    - by Søren
    Hi, I have recently started to use MATLAB Simulink, and my problem is that I can't implement an AMDF function, because the Simulink compiler cannot determine the lengths. Simulink errors:

        Could not determine the size of this expression.
        Function 'Embedded MATLAB Function2' (#38.728.741), line 33, column 32: "1:flength-k+1"
        Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2'(#38)
        Embedded MATLAB Interface Error: Errors occurred during parsing of Embedded MATLAB function 'Embedded MATLAB Function2'(#38).

    My code (the comparison operators in the two if statements were stripped by the page; they are reconstructed here as > and >=):

        persistent sLength
        persistent fLength
        persistent amdf

        % Length of the frame
        flength = length(frame);

        % Pitch period is between 2.5 ms and 19.5 ms for the LPC-10 algorithm,
        % because the algorithm assumes the frequency span is 50 to 400 Hz.
        pH = ceil((1/min(fspan))*fs);
        if(pH > flength)
            pH = flength;
        end;
        pL = ceil((1/max(fspan))*fs);
        if(pL <= 0 || pL >= flength)
            pL = 0;
        end;
        sLength = pH - pL;

        % Normalize the frame
        frame = frame/max(max(abs(frame)));

        % Allocating memory for the calculation of the amdf
        %amdf = zeros(1,sLength);
        amdf = 0;

        % Calculating the AMDF with unbiased normalizing
        for k = (pL+1):pH
            amdf(k-pL) = sum(abs(frame(1:flength-k+1) - frame(k:flength)))/(flength-k+1);
        end;

        % Output of the AMDF
        if(min(amdf) < lvlThr)
            voiced = 1;
        else
            voiced = 0;
        end;

        % Output of the minimum of the amdf
        minAMDF = min(amdf);

    HELP. Kind regards, Søren

    Read the article

  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until when the list contains probably 2000+ items. I'm using a simple LINQ statement for doing the filtering: var filterCollection = from s in listCollection where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0 orderby s.FilterValue select s; I then assign this collection to a WPF Listbox's ItemSource, and that's the end of it, works fine. Noting that, the Listbox is also virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree. However the caveat right now is that, when the user types extremely fast in the richtextbox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out of sync filtering, like the first key stroke's filtering could still be doing it's filtering or binding work, while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of string's could be optimised in some kind of index, that could speed up searching. Any suggestions of code samples are much welcomed. Thanks.

    Read the article

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main()
        {
            string a, b, c;
            //Do stuff with a and b.
            c = get_string(a,b);
        }

        string get_string(string a, string b)
        {
            const char * a_ch, b_ch;
            a_ch = strtok(a.c_str(),",");
            b_ch = strtok(b.c_str(),",");
        }

    strtok is infamous for being great at tokenizing but equally great at destroying the original string being tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator was completely removed from the string. Here's an example in case I'm being unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they ended up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same as I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory for the strings and copying them the "standard" way, but the only approach that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez

    Read the article
