Search Results

Search found 10033 results on 402 pages for 'execution speed'.

  • How can I initialize a QTMovie object with certain attributes using writable data?

    - by c-had
    I'm trying to create an empty QTMovie object that I can add segments to, and then play. This is easy to do with something like:

        movie = [[QTMovie alloc] initToWritableData:[NSMutableData dataWithCapacity:1048576] error:&error];

    I can then use -insertSegmentOfMovie to insert segments from other movies into this one so I can play it back.

    The problem is that I also need to set a certain attribute when creating the QTMovie object. In particular, I need to set the QTMovieRateChangesPreservePitchAttribute attribute, so that I can alter playback speed during playback without changing pitch. This attribute cannot be written after the movie is initialized. So, I can create the QTMovie object like this:

        movie = [[QTMovie alloc] initWithAttributes:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], QTMovieRateChangesPreservePitchAttribute, nil] error:&error];

    Unfortunately, this is not editable. I've tried setting the QTMovieEditableAttribute as well on creation, but it does not help. I still get an exception when I try to insert anything into this movie. I presume this is because there is no writable file or data reference associated with the QTMovie. Any ideas on how to solve this?

  • How to provide js-ctypes in a spidermonkey embedding?

    - by Triston J. Taylor
    Summary

    I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than-novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality.

    method.madness

    This js-ctypes implementation by the Mozilla devs is an awesome addition. Since its conception, scripting has been primarily used to exert control over more rigorous and robust applications. The advent of js-ctypes in the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object-oriented rapid application development language, flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6.

    Shall we begin? I built SpiderMonkey with this config:

        ./configure --enable-ctypes --with-system-nspr

    followed by successful execution of:

        make && make install

    The js shell works fine, and a global ctypes JavaScript object was verified operational in that shell. Working with code taken from the first source listing at How to embed the JavaScript Engine (MDN), I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66:

        /* Populate the global object with the ctypes object. */
        if (!JS_InitCTypesClass(cx, global))
            return NULL;

    I compiled with:

        g++ $(./js-config --cflags --libs) hello.cpp -o hello

    It compiles with a few warnings:

        hello.cpp: In function ‘int main(int, const char**)’:
        hello.cpp:69:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
        hello.cpp:80:20: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
        hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith]

    But when you run the application:

        ./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass

    Moreover, JS_InitCTypesClass is declared extern in 'dist/include/jsapi.h', but the function resides in 'ctypes/CTypes.cpp', which includes its own header 'CTypes.h' and is compiled at some point by some command during 'make' to yield './CTypes.o'. As I stated earlier, I am less than a novice with C code, and I really have no idea what to do here. Please give, or give direction to, a generic example of making the js-ctypes object functional in an embedding.

  • Does ASP.NET need to be configured for Full Trust to implement 'PageHandlerFactory'?

    - by Kev
    Our hosting platform (running IIS6/ASP.NET 2.0) is configured to run under partial trust. In the machine-wide web.config file we set the ASP.NET trust level to Medium (and lock it to prevent overrides) and use a modified policy file.

    When trying to add a custom HttpHandler to handle .aspx requests for a website running in this configuration, I get the following security exception:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.

        Exception Details: System.Security.SecurityException: Request failed.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SecurityException: Request failed.]
        System.Reflection.Assembly._GetType(String name, Boolean throwOnError, Boolean ignoreCase) +0
        System.Reflection.Assembly.GetType(String name, Boolean throwOnError, Boolean ignoreCase) +42
        System.Web.Compilation.CompilationUtil.GetTypeFromAssemblies(AssemblyCollection assembliesCollection, String typeName, Boolean ignoreCase) +172
        System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +291
        System.Web.Configuration.ConfigUtil.GetType(String typeName, String propertyName, ConfigurationElement configElement, XmlNode node, Boolean checkAptcaBit, Boolean ignoreCase) +52

    I'm using a class derived from PageHandlerFactory, for example:

        public class MyPageHandlerFactory : PageHandlerFactory
        {
            public override System.Web.IHttpHandler GetHandler(System.Web.HttpContext context, string requestType, string virtualPath, string path)
            {
                // CustomPageHandler derives from System.Web.UI.Page
                return new CustomPageHandler();
            }
        }

    My web.config httpHandler configuration is as follows:

        <httpHandlers>
            <add verb="*" path="*.aspx" type="MyPageHandler.MyPageHandlerFactory" />
        </httpHandlers>

    The documentation for PageHandlerFactory shows that it is decorated with the following attributes:

        [PermissionSetAttribute(SecurityAction.LinkDemand, Unrestricted = true)]
        [PermissionSetAttribute(SecurityAction.InheritanceDemand, Unrestricted = true)]
        public class PageHandlerFactory : IHttpHandlerFactory

    Does this mean that I need to set ASP.NET to run at Full Trust to be able to create my own PageHandlerFactory classes?

  • What is the SSIS order of data transformation component method calls?

    - by Ron Ruble
    I am working on a custom data transformation component. I'm using NUnit and NMock2 to test as I code. Testing and getting the custom UI and other features right is a huge pain, in part because I can't find any documentation about the order in which SSIS invokes methods on the component, at design time as well as at runtime.

    I can correct the issues readily enough, but it's tedious and time-consuming to unregister the old version, register the new version, fire up the test SSIS package, try to display the UI, get an obscure error message, backtrace it, modify the component and continue.

    One of the big issues involves the UI component needing access to the ComponentMetaData and BufferManager properties of the component at design time, and what I need to provide to support properties that won't be initialized until after the user enters them in the UI.

    I can work through it, but if someone knows of some docs or tips that would speed me up, I'd greatly appreciate it. The samples I've found haven't been much use; they seem to be directed at showing off cool stuff (Twitter, weather.com) rather than actual work. Thanks in advance.

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2-million-line file?

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip, at http://download.geonames.org/export/dump/.

    I originally ran the code below with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that version. Still, even with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?

        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()
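
    A pattern that often helps with loads like this is batching the writes instead of issuing one save() per row. A minimal, self-contained sketch of the idea, with sqlite3 standing in for the real ORM model (the table and column names here are illustrative, not from the question):

        import sqlite3

        def load_in_batches(path, batch_size=10000):
            conn = sqlite3.connect("poi.db")
            conn.execute("CREATE TABLE IF NOT EXISTS poi (geonameid TEXT, name TEXT)")
            batch = []
            with open(path) as f:
                for line in f:                     # stream the file; never readlines()
                    li = line.rstrip("\n").split("\t")
                    if len(li) < 19:
                        continue
                    batch.append((li[0], li[1]))
                    if len(batch) >= batch_size:
                        conn.executemany("INSERT INTO poi VALUES (?, ?)", batch)
                        conn.commit()
                        del batch[:]               # drop references so rows can be collected
            if batch:
                conn.executemany("INSERT INTO poi VALUES (?, ?)", batch)
                conn.commit()
            conn.close()

    The same shape applies under Django: commit every N rows rather than once per row, and don't keep references to saved objects alive across iterations.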

  • Temporary storage for keeping data between program iterations?

    - by mr.b
    I am working on an application that works like this:

    1. It fetches data from many sources, resulting in a pool of about 500,000-1,500,000 records (depending on time/day).
    2. The data is parsed.
    3. Part of the data is processed in a way that compares it to pre-existing data (read from the database), calculations are made, and the results are stored in the database. The resulting dataset that has to be stored is, however, much smaller in size (compared to the original data set), and ranges from 5,000-50,000 records. This process almost always updates existing data, and perhaps adds a few more records.

    Then, the data from step 2 should be kept somehow, somewhere, so that the next time data is fetched, there is a data set which can be used to perform calculations without touching the pre-existing data in the database. I should point out that this data can be lost; it's not irreplaceable (the key information can be read from the database if needed), but keeping it would speed up the process next time.

    Application components can (and will) be run on different computers in the same network, so the storage has to be reachable from multiple hosts.

    I have considered using memcached, but I'm not quite sure whether I should, because one record is usually no smaller than 200 bytes, and with 1,500,000 records I guess it would amount to over 300 MB of memcached cache. That doesn't seem scalable to me: what if the data were 5x that amount? What if it were to consume 1-2 GB of cache only to keep data between iterations (which could easily happen)?

    So, the question is: which temporary storage mechanism would be most suitable for this kind of processing? I haven't considered MySQL temporary tables, as I'm not sure they can persist between sessions and be used by other hosts on the network. Any other suggestions? Something I should consider?
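
    For scale, the memcached route is at least simple to prototype. A sketch assuming the python-memcached client (the key names, chunking, and JSON serialization here are illustrative assumptions, not from the question):

        import json
        import memcache

        mc = memcache.Client(["10.0.0.5:11211"])   # a host reachable from all workers

        def save_intermediate(records, chunk_size=1000):
            # memcached rejects values over its item size limit (1 MB by default),
            # so store the record pool as numbered chunks
            chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
            mc.set("intermediate:count", len(chunks))
            for n, chunk in enumerate(chunks):
                mc.set("intermediate:%d" % n, json.dumps(chunk))

        def load_intermediate():
            count = mc.get("intermediate:count")
            if count is None:
                return None    # cache evicted or flushed: rebuild from the database
            return [rec for n in range(count)
                        for rec in json.loads(mc.get("intermediate:%d" % n) or "[]")]

    Since the data is rebuildable, eviction is an inconvenience rather than a correctness problem, which is exactly the case memcached is designed for.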

  • ReportServer 2005 Custom Assembly Error

    - by user752083
    We have a report server on SQL Server 2005. It was previously working on computer X, which we are replacing with computer Y. Computer X had only one default SQL instance as our test environment; machine Y is a new machine with 2 SQL instances, Y\TEST and Y\DEV. Both machines run Windows Server 2003, and both have SQL Server 2005 installed together with Report Server. We have not currently done any work on the DEV instance, just on the TEST instance. The report server is installed for TEST.

    SSRS previously worked as intended on machine X, and on Y\TEST it works for reports not using any custom code. However, for some of the reports we are loading a custom assembly, Localization. This assembly currently exists in the following folders on the server:

        C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PublicAssemblies
        C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies
        C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin

    Also, in rssrvpolicy.config, the entry for Report_Expressions_Default_Permissions has been changed from Execution to FullTrust (although this was not necessary on machine X).

    When attempting to upload the rdl files through the web interface, we get the following errors:

        Error while loading code module: ‘Localization, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e302ddd55ecd694a’. Details: Could not load file or assembly 'Localization, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e302ddd55ecd694a' or one of its dependencies. The system cannot find the file specified. (rsErrorLoadingCodeModule)

        There is an error on line 14 of custom code: [BC30451] Name 'Localization' is not declared. (rsCompilerErrorInCode)

    Does anyone know any other places with potential errors?

  • Are there any configurable parameters to the gpsd?

    - by danatel
    I use the gpsd daemon with my application. Sometimes gpsd ceases to work for no apparent reason (clean sky). Even the gpsmon monitor shows no fix. Are there any parameters which must be set, or is it a hardware problem? I am surprised that many satellites are visible but the "Stat" bitmap does not contain bit 7 - ephemeris data available. Should I somehow pre-configure my position to allow for correct ephemeris data?

    Here is my gpsmon screen (127.0.0.1:2947:/dev/ttyS3, SiRF binary):

        Pos:  3949260 1166016 4856299 m    49.89411°  16.44920°   1379 m
        Vel:  0.0 0.0 0.0 m/s              0.0  0.0   0.0 climb m/s
        Week+TOW: 1578+224837.06   Day: 2 14:27:17.06   Heading: 0.0°   0.0 speed m/s
        Skew: -13.025817   TZ: -7200   HDOP: 0.0   M1: 00   M2: 00
        Fix: 0

        Ch PRN  Az  El  Stat  C/N
         0   2 243  19  003f  40.4
         1  10 249  68  003f  43.0
         2  13  90  30  003f  40.9
         3   7  66  67  003f  39.8
         4   5 295  49  003d  39.7
         5   8 210  69  003f  41.0
         6  23  96   5  002d  28.0
         7   6  43   3  002d  23.1
         8  28 163  16  003f  39.8
         9   0   0   0  0000   0.0
        10   3  55   4  002d  24.7
        11   0   0   0  0000   0.0

        SVs: 0   Drift: 96506   Bias: 135976716   Estimated GPS Time: 224837059
        Max: 167.570   Lat: 132.129   Time: 0.075   MS: 02
        SVs: 11 = 8 10 7 5 13 2 28 23 3 6 4
        DGPS source: 1 (SBAS)   Corrections: 12

  • How do I merge multiple PDB files?

    - by blue.tuxedo
    We are currently using a single command-line tool to build our product on both Windows and Linux. So far it works nicely, allowing us to build out of source and with finer dependencies than any of our previous build systems allowed. This buys us great incremental and parallel build capabilities.

    To describe the build process briefly, we get the usual:

        .cpp  -- cl.exe -->  .obj and .pdb
        multiple .obj and .pdb  -- cl.exe -->  single .dll, .lib, .pdb
        multiple .obj and .pdb  -- cl.exe -->  single .exe, .pdb

    The MSVC C/C++ compiler supports this adequately. Recently the need to build a few static libraries emerged. From what we gathered, the process to build a static library is:

        multiple .cpp  -- cl.exe -->  multiple .obj and a single .pdb
        multiple .obj  -- lib.exe -->  a single .lib

    The single .pdb means that cl.exe should only be executed once for all the .cpp sources. This single execution means that we can't parallelize the build for this static library, which is really unfortunate.

    We investigated a bit further, and according to the documentation (and the available command-line options):

        cl.exe does not know how to build static libraries
        lib.exe does not know how to build .pdb files

    Does anybody know a way to merge multiple PDB files? Are we doomed to have slow builds for static libraries? How do tools like IncrediBuild work around this issue?

  • .NET real-time stream processing - huge and fast RAM buffer needed

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit).

    Basically the application consists of 3 threads:

    1. Main thread - GUI, control, etc.
    2. Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread).
    3. Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files.

    The reason why I used a file buffer is that I wanted to be sure that I won't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe.

    The application works fine, except that the buffered wave file generator is pretty slow. For 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process would increase the speed a lot. The second problem is that the file buffer is really heavy for long records, and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly.

    What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory in 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, the Bus Reader saves the data, the Data Interpreter reads it. What do you think about it?

    [edit] For buffering I'm currently using the BinaryReader and BinaryWriter classes based on a file.
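
    The cyclic-buffer idea is sound; its shape is sketched below in Python for brevity (a C# version would allocate one large byte[] up front and guard it with lock plus Monitor.Wait/Monitor.Pulse; the field names are illustrative):

        import threading

        class RingBuffer:
            """Fixed-size ring buffer: one writer thread, one reader thread."""
            def __init__(self, size):
                self.buf = bytearray(size)     # one big allocation, e.g. RingBuffer(1 << 30)
                self.size = size
                self.read_pos = 0
                self.write_pos = 0
                self.count = 0                 # bytes currently stored
                self.cond = threading.Condition()

            def write(self, data):
                with self.cond:
                    while self.size - self.count < len(data):
                        self.cond.wait()       # block the bus reader until space frees up
                    for b in data:
                        self.buf[self.write_pos] = b
                        self.write_pos = (self.write_pos + 1) % self.size
                    self.count += len(data)
                    self.cond.notify()

            def read(self, n):
                with self.cond:
                    while self.count == 0:
                        self.cond.wait()       # block the interpreter until data arrives
                    n = min(n, self.count)
                    out = bytearray(n)
                    for i in range(n):
                        out[i] = self.buf[self.read_pos]
                        self.read_pos = (self.read_pos + 1) % self.size
                    self.count -= n
                    self.cond.notify()
                    return bytes(out)

    One caveat on the .NET side: a single contiguous 1 GB array is an easy allocation for a 64-bit process, but it can fail on 32-bit due to address-space fragmentation, in which case several smaller segments chained into one logical ring are safer.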

  • How to make a wxFrame behave like a modal wxDialog object

    - by dagorym
    Is it possible to make a wxFrame object behave like a modal dialog box, in that the window creating the wxFrame object stops execution until the wxFrame object exits?

    I'm working on a small game and have run into the following problem. I have a main program window that hosts the main application (the strategic portion). Occasionally, I need to transfer control to a second window for resolution of part of the game (the tactical portion). While in the second window, I want the processing in the first window to stop and wait for completion of the work being done in the second window.

    Normally a modal dialog would do the trick, but I want the new window to have some functionality that I can't seem to get with a wxDialog, namely a status bar at the bottom and the ability to resize/maximize/minimize the window (this should be possible but doesn't work; see this question: How to get the minimize and maximize buttons to appear on a wxDialog object).

    As an additional note, the second window's functionality needs to stay completely decoupled from the primary window, as it will be spun off into a separate program eventually. Has anyone done this, or have any suggestions?

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction).

    This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates into temporary tables. All access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction?

    If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

  • PHP Database connection practice

    - by Phill Pafford
    I have a script that connects to multiple databases (Oracle, MySQL and MSSQL). Each database connection might not be used each time the script runs, but all could be used in a single script execution.

    My question is: is it better to connect to all the databases once at the beginning of the script, even though all the connections might not be used? Or is it better to connect to them as needed? The only catch is that I would need to have the connection call in a loop (so the database connection would be made anew X times in the loop).

    Example Code #1:

        // Connections at the beginning of the script
        $dbh_oracle = connect2db();
        $dbh_mysql  = connect2db();
        $dbh_mssql  = connect2db();

        for ($i=1; $i<=5; $i++) {
            // NOTE: might not use all the connections
            $rs = queryDb($query, $dbh_*); // $dbh can be any of the 3 connections
        }

    Example Code #2:

        // Connections in the loop
        for ($i=1; $i<=5; $i++) {
            // NOTE: would use all the connections, but connecting multiple times
            $dbh_oracle = connect2db();
            $dbh_mysql  = connect2db();
            $dbh_mssql  = connect2db();

            $rs_oracle = queryDb($query, $dbh_oracle);
            $rs_mysql  = queryDb($query, $dbh_mysql);
            $rs_mssql  = queryDb($query, $dbh_mssql);
        }

    Now, I know you could use a persistent connection, but would that be one connection open for each database in the loop as well? Like mysql_pconnect(), mssql_pconnect() and the ADOdb persistent connection method for Oracle. I know that persistent connections can also be resource hogs, and I'm looking for the best performance/practice.
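
    A third shape worth considering is connecting lazily: open each connection the first time it is actually needed, then reuse it for the rest of the run. Sketched in Python rather than PHP (sqlite3 stands in for the three real drivers; the pattern ports directly):

        import sqlite3

        class LazyConnections:
            """Open each named connection on first use, then cache it."""
            def __init__(self, factories):
                self._factories = factories   # name -> zero-argument connect function
                self._conns = {}

            def get(self, name):
                if name not in self._conns:
                    self._conns[name] = self._factories[name]()   # connect at most once
                return self._conns[name]

        dbs = LazyConnections({
            "oracle": lambda: sqlite3.connect(":memory:"),   # stand-ins for the real
            "mysql":  lambda: sqlite3.connect(":memory:"),   # oracle/mysql/mssql drivers
            "mssql":  lambda: sqlite3.connect(":memory:"),
        })

        for i in range(5):
            rs = dbs.get("mysql").execute("SELECT 1")   # connects on the first pass only
            print(rs.fetchone())

    This gives Example #1's connect-once cost with Example #2's only-connect-if-used behavior.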

  • Python Turtle Graphics, how to plot functions over an interval?

    - by TheDragonAce
    I need to plot a function over a specified interval. The function is f1, which is shown below in the code, and the interval is [-7, -3]; [-1, 1]; [3, 7] with a step of .01. When I execute the program, nothing is drawn. Any ideas?

        import turtle
        from math import sqrt

        wn = turtle.Screen()
        wn.bgcolor("white")
        wn.title("Plotting")
        mypen = turtle.Turtle()
        mypen.shape("classic")
        mypen.color("black")
        mypen.speed(10)

        while True:
            try:
                def f1(x):
                    return 2 * sqrt((-abs(abs(x)-1)) * abs(3 - abs(x))/((abs(x)-1)*(3-abs(x)))) * \
                        (1 + abs(abs(x)-3)/(abs(x)-3))*sqrt(1-(x/7)**2)+(5+0.97*(abs(x-0.5)+abs(x+0.5))-\
                        3*(abs(x-0.75)+abs(x+0.75)))*(1+abs(1-abs(x))/(1-abs(x)))

                mypen.penup()
                step = .01
                startf11 = -7
                stopf11 = -3
                startf12 = -1
                stopf12 = 1
                startf13 = 3
                stopf13 = 7

                def f11(startf11, stopf11, step):
                    rc = []
                    y = f1(startf11)
                    while y <= stopf11:
                        rc.append(startf11)
                        #y += step
                        mypen.setpos(f1(startf11)*25, y*25)
                        mypen.dot()

                def f12(startf12, stopf12, step):
                    rc = []
                    y = f1(startf12)
                    while y <= stopf12:
                        rc.append(startf12)
                        #y += step
                        mypen.setpos(f1(startf12)*25, y*25)
                        mypen.dot()

                def f13(startf13, stopf13, step):
                    rc = []
                    y = f1(startf13)
                    while y <= stopf13:
                        rc.append(startf13)
                        #y += step
                        mypen.setpos(f1(startf13)*25, y*25)
                        mypen.dot()

                f11(startf11, stopf11, step)
                f12(startf12, stopf12, step)
                f13(startf13, stopf13, step)
            except ZeroDivisionError:
                continue
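
    For reference, a minimal working sketch of the usual approach: step x (not y) through the interval, skip points where the function is undefined, and scale for the screen. The curve below is an illustrative stand-in for f1; the scale factor of 25 is kept from the question:

        import turtle
        from math import sqrt

        def plot(pen, f, start, stop, step=0.01, scale=25):
            pen.penup()
            x = start
            while x <= stop:
                try:
                    y = f(x)
                    pen.goto(x * scale, y * scale)   # x advances; y comes from f(x)
                    pen.dot()
                except (ValueError, ZeroDivisionError):
                    pass                             # f undefined here; skip the point
                x += step

        wn = turtle.Screen()
        pen = turtle.Turtle()
        pen.speed(0)
        pen.hideturtle()

        f = lambda x: sqrt(49 - x * x) / 2           # stand-in curve, defined on [-7, 7]
        for interval in [(-7, -3), (-1, 1), (3, 7)]:
            plot(pen, f, *interval)

        wn.mainloop()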

  • What would you write in a constitution (law book) for a programmers' country?

    - by Developer Art
    After we have got our great place to talk about professional matters and socialize online - on SO - I believe the next logical step would be to found our own country! I invite you all to participate and bring together your available resources. We will buy an island - better yet, a group of islands - so that we can establish states like the .NET territories, Java land, the Linux republic, etc. We will build a society of programmers (girls - we need you too).

    On the organizational side, we're going to need some constitution or law book. I suggest we write it together. I make it a wiki, as it should be a cooperative effort. I'll open the work.

    Section 1. Programmers' rights.
    1. Every citizen has a right to an Internet connection 24/7.
    2. Every citizen can freely choose their field of interest.

    Section 2. Programmers' obligations.
    1. Every citizen must embrace the changing nature of the profession and constantly educate himself.

    Section 3. Law enforcement.
    1. Code duplication, where it can be avoided, is punished by limiting bandwidth speed to 64 Kbit for a period of one week.
    2. Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month.
    3. Usage of technologies older than 5/10 years is punished by restricting web access to sites last updated 5/10 years ago for a period of one month.

    Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the country's foundation. A purchase fund will be established shortly. Everyone is invited to participate.

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi!

    Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data.

    Here are the two approaches we are considering right now:

    1. Use memcache on all major queries and leave it alone to do what it does best.
    2. Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to increase the nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and simply upgrade the memory as the load increases with the number of users?

    Thanks a lot!
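
    For concreteness, the combined approach usually looks like a two-tier lookup: check memory first, fall back to disk, and promote disk hits back into memory. A self-contained sketch in Python (a dict stands in for memcached, JSON files for the disk tier; the names are illustrative):

        import os
        import json
        import hashlib

        class TieredCache:
            """Two-tier cache: fast in-memory tier over a slower on-disk tier."""
            def __init__(self, cache_dir):
                self.memory = {}                  # stand-in for the memcached tier
                self.cache_dir = cache_dir
                if not os.path.isdir(cache_dir):
                    os.makedirs(cache_dir)

            def _path(self, key):
                digest = hashlib.md5(key.encode("utf-8")).hexdigest()
                return os.path.join(self.cache_dir, digest)

            def get(self, key):
                if key in self.memory:            # tier 1: memory hit
                    return self.memory[key]
                path = self._path(key)
                if os.path.exists(path):          # tier 2: disk hit
                    with open(path) as f:
                        value = json.load(f)
                    self.memory[key] = value      # promote the hot key to memory
                    return value
                return None                       # miss: caller queries the database

            def set(self, key, value):
                self.memory[key] = value          # write through both tiers
                with open(self._path(key), "w") as f:
                    json.dump(value, f)

    The memory tier can then stay small (only promoted, genuinely hot keys live there), which is what makes the hybrid cheaper than an all-memcached setup as traffic grows.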

  • Problem with Mergesort in C++

    - by cambr
        vector<int>& mergesort(vector<int> &a) {
            if (a.size() == 1)
                return a;

            int middle = a.size() / 2;

            vector<int>::const_iterator first = a.begin();
            vector<int>::const_iterator mid = a.begin() + (middle - 1);
            vector<int>::const_iterator last = a.end();

            vector<int> ll(first, mid);
            vector<int> rr(mid, last);

            vector<int> l = mergesort(ll);
            vector<int> r = mergesort(rr);

            vector<int> result;
            result.reserve(a.size());

            int dp = 0, lp = 0, rp = 0;
            while (dp < a.size()) {
                if (lp == l.size()) {
                    result[dp] = (r[rp]);
                    rp++;
                } else if (rp == r.size()) {
                    result[dp] = (l[lp]);
                    lp++;
                } else if (l[lp] < r[rp]) {
                    result[dp] = (l[lp]);
                    lp++;
                } else {
                    result[dp] = (r[rp]);
                    rp++;
                }
                dp++;
            }
            a = result;
            return a;
        }

    It compiles correctly, but during execution I get:

        This application has requested the runtime to end it in an unusual way.

    This is a weird error. Is there something fundamentally wrong with the code?
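
    Two things stand out. std::vector::reserve() only changes capacity, not size, so result[dp] writes past the end of an empty vector, which is undefined behavior; the elements have to be appended (or the vector constructed with a size). And mid = a.begin() + (middle - 1) looks off by one: for a two-element vector it makes ll empty and rr the whole input, so the recursion never shrinks. The intended shape of the merge, sketched in Python (appending rather than indexing into unallocated storage):

        def mergesort(a):
            if len(a) <= 1:
                return a
            middle = len(a) // 2
            l = mergesort(a[:middle])        # split at middle, not middle - 1
            r = mergesort(a[middle:])
            result = []
            lp = rp = 0
            while lp < len(l) or rp < len(r):
                if rp == len(r) or (lp < len(l) and l[lp] < r[rp]):
                    result.append(l[lp])     # append grows the list; no out-of-range writes
                    lp += 1
                else:
                    result.append(r[rp])
                    rp += 1
            return result

        print(mergesort([5, 2, 4, 7, 1, 3, 2, 6]))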

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In Visual Studio 2008 you get an "uninitialized local variable used" warning at compile time, and a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is "don't do that", but nobody's perfect. For example:

        struct Test {
            float GetMember() const { return member; }
            float member;
        };

        Test test;
        float f1 = test.member;      // Raises warning, asserts in VS debugger at runtime
        float f2 = test.GetMember(); // No problem, just keeps on going

    This surprised me, but it makes some sense - the compiler can't assume calling a function on an uninitialized struct is an error, or how else would you initialize or construct it? And anything fancier quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are OK to call and when, especially as just a debugging help.

    I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs. Still, within the context of the function call, wouldn't it know inside GetMember() that member wasn't initialized yet? I'm assuming it's not relying only on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or more tied to how C++ works?

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they can enter SQL code, which is executed by the isql command.

    I would like to extend the application to use DbUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DbUnit rather than just SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup.

    My question is: how can I provide users with the functionality of the DbUnit DatabaseOperation methods from a web interface?

    I have considered:

    1. Creating a simple programming language and a parser to read some simple syntax involving the DbUnit method names, which accept as a parameter the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and give each file an identifier that could be passed as a parameter to the methods in this simple programming language.
    2. Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went this route, how can I execute the methods and their parameters that I parse from the XML document?
    3. Creating a table in the database which stores the method and a FK relation to a catalogued DataSet file. However, I don't think this would be a good solution, due to the fact that data entry would be tedious.

    Thanks for your help.

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

    1. Start Swank with lein swank.
    2. Go to Emacs and connect with M-x slime-connect.
    3. Load all existing source files one by one. This also starts a Jetty server and an application.
    4. Write some code in the REPL.
    5. When satisfied with experiments, write a full version of the construct I had in mind. Eval (C-c C-c) it.
    6. Switch the REPL to the namespace where this construct resides and test it.
    7. Switch to the browser and reload the browser tab with the affected page.
    8. Tweak the code, eval it, check in the browser.
    9. Repeat any of the above.

    There are a number of annoyances with it:

    1. I have to switch between Emacs and the browser (or browsers if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
    2. My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions, and then I need to restart Swank. Can I undef a symbol?
    3. I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
    4. Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours.

    P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.

  • Dynamic expression tree how-to

    - by Savvas Sopiadis
    Hello everybody!

    I have implemented a generic repository with several methods. One of them is this:

        public IEnumerable<T> Find(Expression<Func<T, bool>> where)
        {
            return _objectSet.Where(where);
        }

    Given this, it is easy to call it like so:

        Expression<Func<Culture, bool>> whereClause = c => c.CultureId > 4;
        return cultureRepository.Find(whereClause).AsQueryable();

    But now I see (realize) that this kind of querying is limited to one criterion. What I would like to do is this: in the above example, c is of type Culture. Culture has several properties like CultureId, Name, Displayname, ... How would I express the following:

        CultureId > 4 and Name.contains('de')

    and in another execution:

        Name.contains('us') and Displayname.contains('ca')

    and so on. Those queries should be created dynamically. I had a look at expression trees (as I thought they would solve my problem - by the way, I have never used them before), but I cannot find anything which points to my requirement. How can this be constructed? Thanks in advance.
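
    The usual shape of the answer is to build the predicate incrementally, AND-ing one criterion at a time (in C# this is done by joining expressions with Expression.AndAlso, or with a helper such as PredicateBuilder). The composition idea itself, sketched in Python (the Culture stub and sample data are illustrative):

        from collections import namedtuple

        Culture = namedtuple("Culture", "culture_id name displayname")

        def and_all(predicates):
            """Combine one-argument predicates into a single AND predicate."""
            return lambda item: all(p(item) for p in predicates)

        # built dynamically, e.g. from user-supplied filter criteria
        checks = [
            lambda c: c.culture_id > 4,
            lambda c: "de" in c.name,
        ]
        where = and_all(checks)

        rows = [Culture(5, "de-DE", "German"), Culture(2, "en-US", "English")]
        print([c for c in rows if where(c)])   # -> only the de-DE row

    The repository's Find keeps its single Expression<Func<T, bool>> parameter; only the way the expression is assembled changes.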

  • Pass a number value to a Timer (XML and AS3)

    - by VideoDnd
    I want to pass a number value to a Timer. How do I do this? My number and integer values for other variables work fine.

    The errors I get are a null object reference and coercion of a value, because I'm not passing the value to 'timer' properly. I don't want to say my variable is a number; I want to say it has a number value.

    Variable (what I have now):

        var timer:Timer;
        timer = new Timer(100);

    Path:

        myXML.COUNT.text();

    XML:

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
            <TIMER TITLE="speed">100</TIMER>
        </SESSION>

    Parse and load:

        //LOAD XML
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("time.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);

        //PARSE XML
        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
        }

  • High runtime for Dictionary.Add for a large number of items

    - by aaginor
    Hi folks,

    I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed.

    Here is my function:

        private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
        {
            Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);

            foreach (NodeConnection con in connections)
            {
                ...
                resultSet.Add(nodeIdPair, newEdge);
            }

            return resultSet;
        }

    In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary() (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (although I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms per item.

    Any idea how to make the add operation faster?

    Thanks in advance,
    Frank
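
    If presizing changes nothing, one thing worth ruling out is the quality of IdPair's GetHashCode: when most keys collide, every Add degenerates into a scan of the colliding entries, and the cost grows with the item count exactly as described. The effect, sketched in Python (the key classes are illustrative; the C# analog is overriding GetHashCode/Equals on IdPair):

        import time

        class GoodKey:
            def __init__(self, a, b):
                self.a, self.b = a, b
            def __eq__(self, other):
                return (self.a, self.b) == (other.a, other.b)
            def __hash__(self):
                return hash((self.a, self.b))   # spreads keys across buckets

        class BadKey(GoodKey):
            def __hash__(self):
                return 1                        # every key lands in one bucket

        for cls in (GoodKey, BadKey):
            d = {}
            start = time.time()
            for i in range(5000):
                d[cls(i, i + 1)] = i            # BadKey: each insert scans all prior keys
            print(cls.__name__, "%.3f s" % (time.time() - start))

    A dictionary can only be as fast as its key's hash function, with or without a capacity hint.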

  • When does an ARM7 processor increment its PC register?

    - by Summer_More_More_Tea
    Hi everyone,

    I've been thinking about this question for a while: when does an ARM7 processor (with a 3-stage pipeline) increment its PC register?

    I originally thought that after an instruction has been executed, the processor first checks whether any exception occurred in the last execution, then increases PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 changes its running mode, stores PC in the LR of the current mode, and begins to process the current exception without modifying the PC register.

    But this makes no sense when analyzing return instructions. I cannot work out why PC is assigned LR when returning from an undefined-instruction exception, but LR-4 when returning from a prefetch-abort exception - don't both of these exceptions happen at the decode stage?

    What's more, according to my textbook, PC will always be assigned LR-4 when returning from a prefetch-abort exception, no matter what state the processor was in (ARM or Thumb) before the exception occurred. However, I think PC should be assigned LR-2 if the original state was Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes of an ARM instruction, and we just want to roll back one instruction in the current state.

    Are there flaws in my reasoning, or is something wrong with the textbook? Seems a long question - I really hope someone can help me get the right answer. Thanks in advance.

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys,

    We have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it's a good idea to minimize the actual data needed to be stored in the cache layer, to speed up the application further.

    All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) applies to. From time to time there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials.

    Is it bad to make a temp table that binds the tables together? It would only have the necessary information and would of course be cached.

        productId | hasPrice | hasSpecial
        ----------+----------+-----------
                1 |        1 |          0
                2 |        1 |          1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented.

    Are temp tables a common thing for web applications, or is it just bad design?
