Search Results

Search found 3677 results on 148 pages for 'concurrent vector'.

Page 125/148 | < Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this: CREATE TABLE [BulkData]( [ContainerId] [int] NOT NULL, [BinId] [smallint] NOT NULL, [Sequence] [smallint] NOT NULL, [ItemId] [int] NOT NULL, [Left] [smallint] NOT NULL, [Top] [smallint] NOT NULL, [Right] [smallint] NOT NULL, [Bottom] [smallint] NOT NULL, CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED ( [ContainerId] ASC, [BinId] ASC, [Sequence] ASC )) I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Would it help if I: (1) drop the primary key while I am doing the inserting and recreate it later; (2) do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the table where insertions are happening small; (3) anything else? -- Based on the responses I have gotten, let me clarify a little bit: Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for the import? Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output. Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
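    A minimal sketch of the SqlBulkCopy settings worth experimenting with here (the connection string, DataTable variable and batch size are illustrative, not taken from the question):

        // Take a bulk-update table lock for the duration instead of row locks,
        // which lets SQL Server use its optimized insert path.
        using (var bulk = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock))
        {
            bulk.DestinationTableName = "BulkData";
            bulk.BatchSize = 0;          // 0 = whole chunk in one batch; also try larger explicit sizes
            bulk.BulkCopyTimeout = 0;    // no timeout for long-running copies
            bulk.WriteToServer(chunk);   // chunk: a DataTable holding one pre-sorted 300-row chunk
        }

    Since each chunk arrives pre-sorted on the clustered key, measuring TableLock plus batch size is usually cheaper to try than dropping and rebuilding the index.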

    Read the article

  • Blocking on DBCP connection pool (open and close connection). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (we used to run on JBoss, WebLogic, etc.). While running load tests we experienced significant performance problems handling JMS messages (queues). The problem was localized to blocking on the database connection pool when getting or releasing connections. The blocking prevented concurrent MDB instances (threads) from running, so performance suffered 10-fold or worse. The same code used to run on those application servers (with their respective connection pool implementations) with no blocking at all. Example of a blocked thread: Name: JMS Resource Adapter-worker-23 State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19 Total blocked: 18,426 Total waited: 0 Stack trace: org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916) org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91) - locked org.apache.commons.dbcp.PoolableConnection@1bcba8 org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147) com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290) .... A couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but the MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions, though (and we can observe blocking there as well). Are there any DBCP configurations that may fix the blocking? Is the DBCP connection pool implementation replaceable in OpenEJB? How easy (or difficult) would it be to replace it with another library? Just in case, this is how we define the data source in OpenEJB (openejb.xml): <Resource id="MyDataSource" type="DataSource"> JdbcDriver oracle.jdbc.driver.OracleDriver JdbcUrl ${oracle.jdbc} UserName ${oracle.user} Password ${oracle.password} JtaManaged true InitialSize 5 MaxActive 30 ValidationQuery SELECT 1 FROM DUAL TestOnBorrow true </Resource>
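    If replacing DBCP proves hard, the contention can at least be reduced through pool sizing; a sketch of the kind of properties to try, assuming OpenEJB forwards them to commons-pool/DBCP the same way it forwards InitialSize and MaxActive (the values are illustrative):

        <Resource id="MyDataSource" type="DataSource">
            JdbcDriver oracle.jdbc.driver.OracleDriver
            JdbcUrl ${oracle.jdbc}
            UserName ${oracle.user}
            Password ${oracle.password}
            JtaManaged true
            # size the pool to the MDB/worker thread count, so threads queue
            # on work rather than on the pool monitor
            InitialSize 30
            MaxActive 60
            MaxIdle 60
            # fail fast (milliseconds) instead of parking forever in GenericObjectPool
            MaxWait 10000
            # validating on every borrow adds a database round trip inside the pool lock
            TestOnBorrow false
            ValidationQuery SELECT 1 FROM DUAL
        </Resource>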

    Read the article

  • How to connect to a BluetoothBee device using J2ME?

    - by user1500412
    I developed a simple bluetooth connection application in j2me. I try it on emulator, both server and client can found each other, but when I deploy the application to blackberry mobile phone and connect to a bluetoothbee device it says service search no records. What could it be possibly wrong? is it j2me can not find a service in bluetoothbee? The j2me itself succeed to found the bluetoothbee device, but why it can not find the service? My code is below. What I don't understand is the UUID? how to set UUID for unknown source? since I didn't know the UUID for the bluetoothbee device. class SearchingDevice extends Canvas implements Runnable,CommandListener,DiscoveryListener{ //...... public SearchingDevice(MenuUtama midlet, Display display){ this.display = display; this.midlet = midlet; t = new Thread(this); t.start(); timer = new Timer(); task = new TestTimerTask(); /*--------------------Device List------------------------------*/ select = new Command("Pilih",Command.OK,0); back = new Command("Kembali",Command.BACK,0); btDevice = new List("Pilih Device",Choice.IMPLICIT); btDevice.addCommand(select); btDevice.addCommand(back); btDevice.setCommandListener(this); /*------------------Input Form---------------------------------*/ formInput = new Form("Form Input"); nama = new TextField("Nama","",50,TextField.ANY); umur = new TextField("Umur","",50,TextField.ANY); measure = new Command("Ukur",Command.SCREEN,0); gender = new ChoiceGroup("Jenis Kelamin",Choice.EXCLUSIVE); formInput.addCommand(back); formInput.addCommand(measure); gender.append("Pria", null); gender.append("Wanita", null); formInput.append(nama); formInput.append(umur); formInput.append(gender); formInput.setCommandListener(this); /*---------------------------------------------------------------*/ findDevice(); } /*----------------Gambar screen searching device---------------------------------*/ protected void paint(Graphics g) { g.setColor(0,0,0); g.fillRect(0, 0, getWidth(), getHeight()); g.setColor(255,255,255); g.drawString("Mencari Device", 20, 20, Graphics.TOP|Graphics.LEFT); if(this.counter == 1){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); } if(this.counter == 2){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); } if(this.counter == 3){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); g.setColor(255,115,200); g.fillRect(100, 60, 20, 60); } if(this.counter == 4){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); g.setColor(255,115,200); g.fillRect(100, 60, 20, 60); g.setColor(100,255,255); g.fillRect(140, 40, 20, 80); //display.callSerially(this); } } /*--------- Running Searching Screen ----------------------------------------------*/ public void run() { while(run){ this.counter++; if(counter > 4){ this.counter = 1; } try { Thread.sleep(1000); } catch (InterruptedException ex) { System.out.println("interrupt"+ex.getMessage()); } repaint(); } } /*-----------------------------cari device bluetooth yang -------------------*/ public void findDevice(){ try { devices = new java.util.Vector(); local = LocalDevice.getLocalDevice(); agent = local.getDiscoveryAgent(); local.setDiscoverable(DiscoveryAgent.GIAC); agent.startInquiry(DiscoveryAgent.GIAC, this); } catch (BluetoothStateException ex) { System.out.println("find device"+ex.getMessage()); } } /*-----------------------------jika device ditemukan--------------------------*/ public void 
deviceDiscovered(RemoteDevice rd, DeviceClass dc) { devices.addElement(rd); } /*--------------Selesai tes koneksi ke bluetooth server--------------------------*/ public void inquiryCompleted(int param) { switch(param){ case DiscoveryListener.INQUIRY_COMPLETED: //inquiry completed normally if(devices.size()>0){ //at least one device has been found services = new java.util.Vector(); this.findServices((RemoteDevice)devices.elementAt(0)); this.run = false; do_alert("Inquiry completed",4000); }else{ do_alert("No device found in range",4000); } break; case DiscoveryListener.INQUIRY_ERROR: do_alert("Inquiry error",4000); break; case DiscoveryListener.INQUIRY_TERMINATED: do_alert("Inquiry canceled",4000); break; } } /*-------------------------------Cari service bluetooth server----------------------------*/ public void findServices(RemoteDevice device){ try { // int[] attributes = {0x100,0x101,0x102}; UUID[] uuids = new UUID[1]; //alamat server uuids[0] = new UUID("F0E0D0C0B0A000908070605040302010",false); //uuids[0] = new UUID("8841",true); //menyiapkan device lokal local = LocalDevice.getLocalDevice(); agent = local.getDiscoveryAgent(); //mencari service dari server agent.searchServices(null, uuids, device, this); //server = (StreamConnectionNotifies)Connector.open(url.toString()); } catch (BluetoothStateException ex) { // ex.printStackTrace(); System.out.println("Errorx"+ex.getMessage()); } } /*---------------------------Pencarian service selesai------------------------*/ public void serviceSearchCompleted(int transID, int respCode) { switch(respCode){ case DiscoveryListener.SERVICE_SEARCH_COMPLETED: if(currentDevice == devices.size() - 1){ if(services.size() > 0){ this.run = false; display.setCurrent(btDevice); do_alert("Service found",4000); }else{ do_alert("The service was not found",4000); } }else{ currentDevice++; this.findServices((RemoteDevice)devices.elementAt(currentDevice)); } break; case DiscoveryListener.SERVICE_SEARCH_DEVICE_NOT_REACHABLE: do_alert("Device not Reachable",4000); break; case DiscoveryListener.SERVICE_SEARCH_ERROR: do_alert("Service search error",4000); break; case DiscoveryListener.SERVICE_SEARCH_NO_RECORDS: do_alert("No records return",4000); break; case DiscoveryListener.SERVICE_SEARCH_TERMINATED: do_alert("Inquiry canceled",4000); break; } } public void servicesDiscovered(int i, ServiceRecord[] srs) { for(int x=0; x<srs.length;x++) services.addElement(srs[x]); try { btDevice.append(((RemoteDevice)devices.elementAt(currentDevice)).getFriendlyName(false),null); } catch (IOException ex) { System.out.println("service discover"+ex.getMessage()); } } public void do_alert(String msg, int time_out){ if(display.getCurrent() instanceof Alert){ ((Alert)display.getCurrent()).setString(msg); ((Alert)display.getCurrent()).setTimeout(time_out); }else{ Alert alert = new Alert("Bluetooth"); alert.setString(msg); alert.setTimeout(time_out); display.setCurrent(alert); } } private String getData(){ System.out.println("getData"); String cmd=""; try { ServiceRecord service = (ServiceRecord)services.elementAt(btDevice.getSelectedIndex()); String url = service.getConnectionURL(ServiceRecord.NOAUTHENTICATE_NOENCRYPT, false); conn = (StreamConnection)Connector.open(url); DataInputStream in = conn.openDataInputStream(); int i=0; timer.schedule(task, 15000); char c1; while(time){ //while(((c1 = in.readChar())>0) && (c1 != '\n')){ //while(((c1 = in.readChar())>0) ){ c1 = in.readChar(); cmd = cmd + c1; //System.out.println(c1); // } } System.out.print("cmd"+cmd); if(time == false){ in.close(); 
conn.close(); } } catch (IOException ex) { System.err.println("Cant read data"+ex); } return cmd; } //timer task fungsinya ketika telah mencapai waktu yg dijadwalkan putus koneksi private static class TestTimerTask extends TimerTask{ public TestTimerTask() { } public void run() { time = false; } } }

    Read the article

  • PostgreSQL insert on primary key failing with contention, even at serializable level

    - by Steven Schlansker
    I'm trying to insert or update data in a PostgreSQL db. The simplest case is a key-value pairing (the actual data is more complicated, but this is the smallest clear example). When you set a value, I'd like it to insert if the key is not there, and otherwise update. Sadly Postgres does not have a single insert-or-update statement, so I have to emulate it myself. I've been working with the idea of basically SELECTing whether the key exists, and then running the appropriate INSERT or UPDATE. Clearly this needs to be in a transaction or all manner of bad things could happen. However, this is not working exactly how I'd like it to - I understand that there are limitations to serializable transactions, but I'm not sure how to work around this one. Here's the situation (a and b are two concurrent sessions) - ab: => set transaction isolation level serializable; a: => select count(1) from table where id=1; --> 0 b: => select count(1) from table where id=1; --> 0 a: => insert into table values(1); --> 1 b: => insert into table values(1); --> ERROR: duplicate key value violates unique constraint "serial_test_pkey" Now I would expect it to throw the usual "could not serialize access due to concurrent update", but I'm guessing that since the inserts are different "rows" this does not happen. Is there an easy way to work around this?
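    For reference, the PostgreSQL manual's merge_db example handles exactly this race at the default READ COMMITTED level by retrying in a loop; a sketch adapted to the key-value case (table and column names are made up):

        CREATE FUNCTION merge_kv(k integer, v text) RETURNS void AS
        $$
        BEGIN
            LOOP
                UPDATE kv SET value = v WHERE id = k;
                IF found THEN
                    RETURN;
                END IF;
                BEGIN
                    INSERT INTO kv (id, value) VALUES (k, v);
                    RETURN;
                EXCEPTION WHEN unique_violation THEN
                    -- a concurrent transaction inserted the key first;
                    -- fall through and retry the UPDATE
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;

    The idea is to treat the duplicate-key error as the signal rather than trying to prevent it: serializable isolation cannot help here, because neither transaction reads a row the other one wrote.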

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this... Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client. Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client. Client N: ... Import Data and Process Data are calling jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fail, it goes to the next client's Import Data... Both Import Data and Process Data are jobs containing SSIS packages that use the built-in SQL logging provider. My expectation with the configuration as it stands is: Client 1 Import Data runs: on failure -> Client 2 Import Data; on success -> Process Data. Process Data runs: on failure -> Client 2 Import Data; on success -> Post Process. Post Process runs: on completion (success or failure) -> next client's Import Data. This isn't what I'm seeing in my logs though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" branch of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise though... I really need these tasks to run consecutively, not concurrently. Is this possible? Thanks!

    Read the article

  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables. All three executables share the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file linked below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referred to as an appender-ref in that executable's logger configuration. For instance, when I start my sending daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the 3 files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule scheduler daemon, this other process cannot write to Log/RuleScheduler.txt because it has been created and locked by the sending daemon process. I am guessing that there may be three different solutions to my problem: GetLogger should only create the rolling file appenders that are referenced in the config. I could have one config file per executable; this way each config file would list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am however reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way for one config file to include another? Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because any one daemon should not be creating the rolling files of the other daemons. Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML), so please see the following link for my log4net configuration file: log4Net configuration file
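    On point 1: log4net wires up every appender referenced by the configured loggers when the configuration is loaded, not when an individual logger is first used, so with one shared config file all three files will always be created. For the locking half of the problem, a hedged workaround is log4net's minimal-lock model, which takes the file lock per write instead of per process; a sketch:

        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
          <file value="Log/NotificationSender.txt" />
          <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
          <appendToFile value="true" />
          <!-- remaining rolling/layout elements as in your existing config -->
        </appender>

    MinimalLock lets several processes write to the same file, at some performance cost and with possible interleaving, so per-executable config files (log4net has no built-in include mechanism) remain the cleaner fix.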

    Read the article

  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form with numerous fields. There is one field where you enter a link and click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this: $.ajax({ type: "POST", url: "add_stock", dataType: "json", data: { "links": link_array } }); when the add link button is selected, the data goes to the correct place with no problem and gets put in the db correctly. But if I bind the above function to the submit button using $('#stock_form').submit(....., then the rest of the form data is sent but not the link_array. I could obviously pass the link array back into a hidden field in the HTML, but then I'd have to pack the array into comma-separated values and break the string apart again in PHP. It just seems 100X easier to unpack the JavaScript array in PHP without any other fuss. So, how can you send an array from JavaScript using $.ajax together with the rest of the $_POST data from the HTML form? Please note that I'm using the Kohana 3.0 framework, but that really shouldn't make a difference; what I want to do is add this JS array to the $_POST array that is already going. Thanks!
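    One way that keeps both payloads in a single request, as a sketch: let the submit handler build the POST body from the form fields plus the array, using jQuery's own serializers ($.param turns the array into links[...] pairs, which PHP unpacks into $_POST['links'] as an array automatically):

        $('#stock_form').submit(function() {
            $.ajax({
                type: "POST",
                url: "add_stock",
                dataType: "json",
                // form fields + the array, in one urlencoded body
                data: $(this).serialize() + '&' + $.param({ links: link_array })
            });
            return false;   // suppress the browser's own form submit
        });

    In the controller, $_POST['links'] then arrives as a PHP array with no manual packing or splitting.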

    Read the article

  • Can't start the portable version of NetBeans 6.9.1 IDE

    - by Coder
    I downloaded the portable version of NetBeans, netbeans-6.9.1-201007282301-ml.zip, from the NetBeans site and changed the config file etc/netbeans.conf as indicated on the NetBeans site. The file contents are below. # ${HOME} will be replaced by JVM user.home system property #netbeans_default_userdir="${HOME}/.netbeans/6.9" netbeans_default_userdir=".netbeans/6.9" # Options used by NetBeans launcher by default, can be overridden by explicit # command line switches: netbeans_default_options="-J-client -J-Xss2m -J-Xms32m -J-XX:PermSize=32m -J-XX:MaxPermSize=200m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true" # Note that a default -Xmx is selected for you automatically. # You can find this value in var/log/messages.log file in your userdir. # The automatically selected value can be overridden by specifying -J-Xmx here # or on the command line. # If you specify the heap size (-Xmx) explicitely, you may also want to enable # Concurrent Mark & Sweep garbage collector. In such case add the following # options to the netbeans_default_options: # -J-XX:+UseConcMarkSweepGC -J-XX:+CMSClassUnloadingEnabled -J-XX:+CMSPermGenSweepingEnabled # (see http://wiki.netbeans.org/wiki/view/FaqGCPauses) # Default location of JDK, can be overridden by using --jdkhome <dir>: #netbeans_jdkhome="/path/to/jdk" netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24\" # Additional module clusters, using ${path.separator} (';' on Windows or ':' on Unix): #netbeans_extraclusters="/absolute/path/to/cluster1:/absolute/path/to/cluster2" # If you have some problems with detect of proxy settings, you may want to enable # detect the proxy settings provided by JDK5 or higher. # In such case add -J-Djava.net.useSystemProxies=true to the netbeans_default_options. But it refuses to start when I try to run it. If I change the JDK path to something incorrect, it complains that it can't find the JDK, so I think the JDK path is correct. It also creates a .netbeans directory when I try to start it. I don't see any errors; it just doesn't do anything else observable. Does anybody know how to set up this version of NetBeans? Thanks.
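    One hypothesis worth testing before anything else: in the jdkhome line above, the trailing backslash immediately before the closing quote can be read as escaping the quote, which would corrupt everything that follows and make the launcher give up silently. The entry without the trailing backslash:

        # quoted Windows paths should not end in a backslash
        netbeans_jdkhome="C:\Program Files\Java\jdk1.6.0_24"

    If that is not it, the messages.log file under the newly created .netbeans/6.9/var/log directory (mentioned in the config comments) is the next place to look for startup errors.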

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense. Assumptions The following pseudo-code is executed on a two-processor machine: int sharedVar = 0; myThread() { print(sharedVar); } main() { sharedVar = 1; spawnThread(myThread); sleep(-1); } main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above.) Question Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0. References These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.) Shared Memory Consistency Models: A Tutorial hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf Memory Barriers: a Hardware View for Software Hackers rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf Linux Kernel Memory Barriers kernel.org/doc/Documentation/memory-barriers.txt Computer Architecture: A Quantitative Approach amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk
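    For Java at least, the language memory model gives a concrete answer: JLS §17.4.5 specifies that a call to Thread.start() happens-before every action in the started thread, so the write to sharedVar must be visible and the program prints 1 on any compliant JVM, whatever the CPU caches are doing. A sketch of the pseudo-code under that guarantee:

        public class StartVisibility {
            static int sharedVar = 0;   // deliberately not volatile

            public static void main(String[] args) {
                sharedVar = 1;                          // (1) this write...
                new Thread(new Runnable() {
                    public void run() {
                        System.out.println(sharedVar);  // (3) ...is visible here: always 1
                    }
                }).start();                             // (2) start() happens-before the thread's actions
            }
        }

    Outside such a defined model, the same conclusion needs the spawn call itself to act as the synchronization point - which POSIX guarantees for pthread_create, since it is among the functions required to synchronize memory.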

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will start with 40K users and within a couple of months is expected to grow to 50M users (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser etc. are simple method calls for the page controllers. Once the traffic increases, authenticating users (or other services related to User entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal is to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls so that it scales out easily. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale services out to multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we have already made architectural design decisions to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
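    A sketch of the encapsulation in question, with hypothetical names: controllers depend on an interface, and whether the call is a direct method call, a named pipe, or TCP is an implementation detail selected at deployment time.

        public interface IUserService
        {
            bool AuthenticateUser(string login, string password);
        }

        // Phase 1: runs in-process; zero transport cost at 40K users.
        public class InProcessUserService : IUserService
        {
            public bool AuthenticateUser(string login, string password)
            {
                return true;   // placeholder for the real check
            }
        }

        // Phase 2 would add e.g. PipeUserService / TcpUserService implementing the
        // same interface over NamedPipeClientStream or TcpClient, chosen via
        // configuration, so the controllers never change.

    The usual caution: an interface designed for in-process calls tends to be too chatty for a network hop, so keeping the operations coarse-grained from the start is what actually preserves the option to move out of process later.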

    Read the article

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I have consequently read the existing posts on SO about the topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging. So basically I would like to keep my schema versions in step with my code revisions. [edit] Momentary solution: for the moment I have decided I will just make a schema dump, plus one dump of the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit] [edit2] Plus I'm now also using a third file called increments.sql, where I put all the changes with dates etc. to make it easy to trace the change history in one file. From time to time I integrate the changes into the other two files and empty increments.sql. [/edit2]
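    For the momentary solution, two mysqldump invocations cover the schema file and the seed-data file (database, table and file names are placeholders):

        # structure only, committed alongside the tagged code revision
        mysqldump --no-data --routines mydb > db/schema.sql

        # initial data only, for the tables that need seeding
        mysqldump --no-create-info mydb settings lookup_codes > db/initial_data.sql

    Keeping both files plus increments.sql under the same SVN tag means any checkout can rebuild a matching database.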

    Read the article

  • Advice on a DB that can be uploaded to a website by a smart client for collecting survey feedback

    - by absfabs
    Hello, I'm hoping you can help. I'm looking for a zero-config multi-user database that my WinForms application can easily upload to a webserver folder (together with 1 or 2 classic ASP pages), and I am looking for suggestions/recommendations. The idea is that the database will be used to collect feedback entered by people filling in the ASP pages. The pages will write to the database using JavaScript. The database will subsequently be downloaded again for processing once the responses are in. In summary: It will mostly run in MS Windows environments. I have a modest budget for this and do not mind paying for such a database, but there must be no runtime licensing costs. It should be xcopy-deployable: once uploaded to a website folder it should be operational. It should not have a .NET CLR dependency. It should support a reasonable level of concurrent access; the average respondent count would be around 20-30, but one never knows. It should be a reasonable size, so that uploads/downloads to and from the site will be reasonably fast. Would appreciate your suggestions/comments. Many thanks, Abz. To clarify - this is a commercial desktop application for feedback management in a vertical market. It uses SQL Server as the backing store. The application currently provides feedback management from email and paper feedback. I now want to add web feedback capability. Getting users to make their SQL Servers accessible to a website is not an option at this time, as I want to make getting up and running as painless as possible. I intend to release a web-based implementation of the software in the near future, but for now am looking at the above as a pragmatic way to provide web-based feedback collection.

    Read the article

  • what libraries or platforms should I use to build web apps that provide real-time, asynchronous data

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the topic of real-time data exchange. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform The OT engine manages document state. Changes between clients are merged and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients. My question is: is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to use it directly for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs generic state-synchronization logic? Or, put another way, how leaky might the state-synchronization abstractions be? Comments and discussion welcomed!

    Read the article

  • SQL Server 2000 intermittent connection exceptions on production server - specific environment probl

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application. Our setup is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us), and whereas we had none of these issues with the previous release, the added complexity the new upgrade brings to the product has caused a lot of issues. We are running SQL Server 2000 (build 8.00.2039, i.e. SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other over TCP/IP. Primarily, the exceptions being thrown are: System.IndexOutOfRangeException: Cannot find table 0. System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the log-in script, even though there is clearly a password column available.] System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one occurs very regularly.] System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable. System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting. System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate. We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut tells me it's an ASP.NET - SQL Server 2000 connection issue. We've pretty much ruled out code-level data access layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's an issue specific to the production server environment.
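    The DataReader and "current state is connecting" exceptions in that list are textbook symptoms of a single SqlConnection (often a static, or one cached in a singleton DAL) being shared by concurrent requests: each error looks like two requests interleaving on one connection, which would also explain why it only appears under production concurrency. Worth auditing the DAL against the one-connection-per-operation pattern; a sketch:

        // Never share a SqlConnection across requests; the pool makes this cheap.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT password FROM users WHERE login = @login", conn))
        {
            cmd.Parameters.AddWithValue("@login", login);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* map the row */ }
            }   // reader disposed before the connection goes back to the pool
        }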

    Read the article

  • Biztalk vs API for databroker layer

    - by jdt199
    My company is about to undergo a large project in which our client wants a large customer portal with a CMS and CRM implementation. This will require interaction with data from multiple sources across our customer's business; these sources include XML office back-end systems, SQL databases, web services etc. Our proposed solution is to write an API in C# to provide a common interface to all these systems. This would be scalable for future and concurrent projects within the company. Our client expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and that it will be better supported. We feel that the configuration work in BizTalk would be rather heavy for all of their custom business rules, and an interface for the new application to get data to and from BizTalk would still need to be written. Are we right to prefer a custom API solution over BizTalk? Would BizTalk be suitable as a data-broker layer to provide an interface for the new customer portal we are writing? We have no experience with BizTalk, so any input would be appreciated.

    Read the article

  • parallelizing code using openmp

    - by anubhav
    Hi, The function below contains nested for loops. There are 3 of them. I have given the whole function below for easy understanding. I want to parallelize the code in the innermost for loop as it takes maximum CPU time. Then i can think about outer 2 for loops. I can see dependencies and internal inline functions in the innermost for loop . Can the innermost for loop be rewritten to enable parallelization using openmp pragmas. Please tell how. I am writing just the loop which i am interested in first and then the full function where this loop exists for referance. Interested in parallelizing the loop mentioned below. //* LOOP WHICH I WANT TO PARALLELIZE *// for (y = 0; y < 4; y++) { refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width); LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; } The full function where this loop exists is below for referance. /*! *********************************************************************** * \brief * Setup the fast search for an macroblock *********************************************************************** */ void SetupFastFullPelSearch (short ref, int list) // <-- reference frame parameter, list0 or 1 { short pmv[2]; pel_t orig_blocks[256], *orgptr=orig_blocks, *refptr, *tem; // created pointer tem int offset_x, offset_y, x, y, range_partly_outside, ref_x, ref_y, pos, abs_x, abs_y, bindex, blky; int LineSadBlk0, LineSadBlk1, LineSadBlk2, LineSadBlk3; int max_width, max_height; int img_width, img_height; StorablePicture *ref_picture; pel_t *ref_pic; int** block_sad = BlockSAD[list][ref][7]; int search_range = max_search_range[list][ref]; int max_pos = (2*search_range+1) * (2*search_range+1); int list_offset = ((img->MbaffFrameFlag)&&(img->mb_data[img->current_mb_nr].mb_field))? img->current_mb_nr%2 ? 
4 : 2 : 0; int apply_weights = ( (active_pps->weighted_pred_flag && (img->type == P_SLICE || img->type == SP_SLICE)) || (active_pps->weighted_bipred_idc && (img->type == B_SLICE))); ref_picture = listX[list+list_offset][ref]; //===== Use weighted Reference for ME ==== if (apply_weights && input->UseWeightedReferenceME) ref_pic = ref_picture->imgY_11_w; else ref_pic = ref_picture->imgY_11; max_width = ref_picture->size_x - 17; max_height = ref_picture->size_y - 17; img_width = ref_picture->size_x; img_height = ref_picture->size_y; //===== get search center: predictor of 16x16 block ===== SetMotionVectorPredictor (pmv, enc_picture->ref_idx, enc_picture->mv, ref, list, 0, 0, 16, 16); search_center_x[list][ref] = pmv[0] / 4; search_center_y[list][ref] = pmv[1] / 4; if (!input->rdopt) { //--- correct center so that (0,0) vector is inside --- search_center_x[list][ref] = max(-search_range, min(search_range, search_center_x[list][ref])); search_center_y[list][ref] = max(-search_range, min(search_range, search_center_y[list][ref])); } search_center_x[list][ref] += img->opix_x; search_center_y[list][ref] += img->opix_y; offset_x = search_center_x[list][ref]; offset_y = search_center_y[list][ref]; //===== copy original block for fast access ===== for (y = img->opix_y; y < img->opix_y+16; y++) for (x = img->opix_x; x < img->opix_x+16; x++) *orgptr++ = imgY_org [y][x]; //===== check if whole search range is inside image ===== if (offset_x >= search_range && offset_x <= max_width - search_range && offset_y >= search_range && offset_y <= max_height - search_range ) { range_partly_outside = 0; PelYline_11 = FastLine16Y_11; } else { range_partly_outside = 1; } //===== determine position of (0,0)-vector ===== if (!input->rdopt) { ref_x = img->opix_x - offset_x; ref_y = img->opix_y - offset_y; for (pos = 0; pos < max_pos; pos++) { if (ref_x == spiral_search_x[pos] && ref_y == spiral_search_y[pos]) { pos_00[list][ref] = pos; break; } } } //===== loop over search range (spiral search): get blockwise SAD ===== **// =====THIS IS THE PART WHERE NESTED FOR STARTS=====** for (pos = 0; pos < max_pos; pos++) // OUTERMOST FOR LOOP { abs_y = offset_y + spiral_search_y[pos]; abs_x = offset_x + spiral_search_x[pos]; if (range_partly_outside) { if (abs_y >= 0 && abs_y <= max_height && abs_x >= 0 && abs_x <= max_width ) { PelYline_11 = FastLine16Y_11; } else { PelYline_11 = UMVLine16Y_11; } } orgptr = orig_blocks; bindex = 0; for (blky = 0; blky < 4; blky++) // SECOND FOR LOOP { LineSadBlk0 = LineSadBlk1 = LineSadBlk2 = LineSadBlk3 = 0; for (y = 0; y < 4; y++) //INNERMOST FOR LOOP WHICH I WANT TO PARALLELIZE { refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width); LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk0 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk1 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk2 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; LineSadBlk3 += byte_abs [*refptr++ - *orgptr++]; } block_sad[bindex++][pos] = LineSadBlk0; block_sad[bindex++][pos] = 
LineSadBlk1; block_sad[bindex++][pos] = LineSadBlk2; block_sad[bindex++][pos] = LineSadBlk3; } } //===== combine SAD's for larger block types ===== SetupLargerBlocks (list, ref, max_pos); //===== set flag marking that search setup have been done ===== search_setup_done[list][ref] = 1; } #endif // _FAST_FULL_ME_
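    For what it's worth, the four-iteration inner loop is a poor parallelization target: every statement advances refptr/orgptr, so the iterations are serially dependent, and the work per iteration is far too small to pay for a thread team. The iterations of the outer pos loop, by contrast, are independent of each other (each writes only its own block_sad[*][pos] column), provided the PelYline_11 function pointer becomes a loop-local variable instead of a global. A self-contained sketch of that shape, with simplified names standing in for the real buffers:

        #include <omp.h>
        #include <stdlib.h>

        /* Toy stand-in for the SAD search: every candidate position computes an
         * independent sum, so the pos loop parallelizes cleanly while the short
         * inner loop stays serial inside each thread. */
        void block_sads(const unsigned char *ref, const unsigned char *org,
                        int *sad, int max_pos, int block_len)
        {
            int pos;
            #pragma omp parallel for schedule(static)
            for (pos = 0; pos < max_pos; pos++) {
                const unsigned char *r = ref + pos;  /* per-thread cursor */
                int acc = 0, i;
                for (i = 0; i < block_len; i++)
                    acc += abs(r[i] - org[i]);
                sad[pos] = acc;                      /* each pos owns its own slot */
            }
        }

    In the real function this means making orgptr, abs_x, abs_y, bindex and the four LineSadBlk accumulators private to each pos iteration, which falls out naturally if they are declared inside the loop body.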

    Read the article

  • Load Spikes on an Apache MySQL Server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache when it spikes. Restarting it helps, and the server comes back - till it happens again. There seems to be no set time frame, or visitor count on the site, that triggers this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes till it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? The Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram
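    For the rewrite-loop theory, Apache 2.2 can trace every rewrite decision; a sketch of the directives (path and verbosity are placeholders, and this is far too expensive to leave enabled outside a short test window):

        # httpd.conf -- Apache 2.2.x (RewriteLog was removed in 2.4)
        RewriteLog "/var/log/httpd/rewrite.log"
        RewriteLogLevel 5        # 0 = off ... 9 = trace everything

        # mod_status shows what each busy child is serving during a spike
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
        </Location>

    A looping rule shows up in the rewrite log as the same URL being rewritten over and over within a single request, while /server-status during a spike shows whether the pile-up is many slow PHP requests or one URL being hammered.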

    Read the article

  • Atomic int writes to a file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, by threads as well as by processes, so no mutexes or locks should be applied here. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new one. So I will need to implement a small locking scheme only to change this one int so that it refers to the new address. What is the best way to do it? I was thinking about maybe putting a flag before the address that, when set, makes readers spin until it's released. But I'm afraid that this isn't at all atomic, is it? E.g.: a reader reads the flag just as a writer sets it and changes the value of the int - the reader may read an inconsistent value! I'm looking for locking techniques, but all I find is either thread locking techniques or locking an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
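    If C11 (or the equivalent compiler builtins) is available, a naturally aligned word can be published with a release store and read with an acquire load - no flag or spin lock at all; readers always see either the old or the new address, never a torn mix. A sketch, assuming the header word lives in memory shared between the processes, e.g. an mmap'd page (whether _Atomic may be applied to mapped memory is implementation-defined, so treat this as a pattern, not a guarantee):

        #include <stdatomic.h>
        #include <stdint.h>

        /* header word in the shared mapping (assigned from mmap elsewhere):
         * file offset of the current root record */
        static _Atomic uint64_t *root_offset;

        void publish(uint64_t new_offset)
        {
            /* the record at new_offset must be fully written before this
             * store makes it reachable to readers */
            atomic_store_explicit(root_offset, new_offset, memory_order_release);
        }

        uint64_t current(void)
        {
            return atomic_load_explicit(root_offset, memory_order_acquire);
        }

    Note this covers the in-memory view between processes; crash durability is a separate problem, since the OS may flush the header page to disk before the appended record.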

    Read the article

  • I am confused -- Will this code always work?

    - by Shekhar
    Hello, I have written this piece of code: public class Test{ public static void main(String[] args) { List<Integer> list = new ArrayList<Integer>(); for(int i = 1;i<= 4;i++){ new Thread(new TestTask(i, list)).start(); } while(list.size() != 4){ // this while loop is required so that all threads complete their work } System.out.println("List "+list); } } class TestTask implements Runnable{ private int sequence; private List<Integer> list; public TestTask(int sequence, List<Integer> list) { this.sequence = sequence; this.list = list; } @Override public void run() { list.add(sequence); } } This code works and prints all four elements of the list on my machine. My question is: will this code always work? I think there might be an issue when two or more threads add elements to this list at the same time: in that case the while loop would never end and the code would fail. Can anybody suggest a better way to do this? I am not very good at multithreading and don't know which concurrent collection I could use. Thanks, Shekhar
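    A sketch of the usual shape of the fix: ArrayList.add() is not safe for concurrent writers (an element can be lost, so size() might really never reach 4), and the empty while loop burns a core; a synchronized list plus a CountDownLatch addresses both.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.CountDownLatch;

        public class Test {
            public static void main(String[] args) throws InterruptedException {
                final List<Integer> list =
                    Collections.synchronizedList(new ArrayList<Integer>());
                final CountDownLatch done = new CountDownLatch(4);
                for (int i = 1; i <= 4; i++) {
                    final int seq = i;
                    new Thread(new Runnable() {
                        public void run() {
                            list.add(seq);      // safe under the wrapper's lock
                            done.countDown();   // signal completion
                        }
                    }).start();
                }
                done.await();                   // block until all four have finished
                System.out.println("List " + list);
            }
        }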

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference could be passed to the array to store the output data: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside would be that only one thread could run the algorithm at any one time. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.... Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads will use a different area of the array independently. This will obviously require some code to manage the offsets and allocation of the array ranges. Any thoughts on which approach would be best (and why)?
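    A sketch of option 3, with hypothetical names: each thread lazily allocates one scratch buffer and reuses it on every call, so there is no garbage in steady state and no contention between concurrent runs.

        public final class Scratch {
            private static final int SIZE = 100000;

            private static final ThreadLocal<double[]> BUFFER =
                new ThreadLocal<double[]>() {
                    @Override protected double[] initialValue() {
                        return new double[SIZE];   // one allocation per thread, ever
                    }
                };

            // same array on every call from the same thread; contents are stale,
            // so callers must treat it as uninitialized scratch space
            public static double[] temp() {
                return BUFFER.get();
            }
        }

    The usual caveats: the buffer must never escape the algorithm call, SIZE caps the problem size per thread, and in a thread pool the arrays live as long as the threads do - which is exactly the steady-state behaviour wanted here.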

    Read the article

  • Opaque tenant identification with SQL Server & NHibernate

    - by Anton Gogolev
    Howdy! We're developing a nowadays-fashionable multi-tenanted SaaS app (shared database, shared schema), and there's one thing I don't like about it: public class Domain : BusinessObject { public virtual long TenantID { get; set; } public virtual string Name { get; set; } } The TenantID is driving me nuts: it has to be accounted for almost everywhere, and it's a hassle from a security standpoint - what happens if a malicious API user changes TenantID to some other value and mixes things up? What I want to do is get rid of this TenantID in our domain objects altogether and have either NHibernate or SQL Server deal with it. From what I've already read on the Internets, this can be done with CONTEXT_INFO (here's an NHibernate-based implementation), NHibernate filters, SQL views, and combinations thereof. Now, my requirements are as follows: Remove any mention of TenantID from domain objects ...but have SQL Server insert it where appropriate (I guess this is achieved with default constraints) ...and obviously provide support for filtering based on this criterion, so that customers will never see each other's data. If possible, avoid SQL views. Have a solution which plays nicely with NHibernate, SQL Server's MARS, and the generally highly concurrent nature of SaaS apps. What are your thoughts on that?
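    A sketch of the NHibernate-filter half, assuming the column stays in the table while vanishing from the class (names here are made up): the mapping appends the tenant predicate to every query against the entity, and one line at session-opening time sets the value.

        <filter-def name="tenantFilter">
          <filter-param name="tenantId" type="Int64" />
        </filter-def>

        <class name="Domain" table="Domain">
          <!-- no TenantID property on the class; only the filter knows about it -->
          <filter name="tenantFilter" condition="TenantID = :tenantId" />
        </class>

    And at session-opening time:

        // derived from the authenticated principal, never from request input
        session.EnableFilter("tenantFilter").SetParameter("tenantId", currentTenantId);

    Filters only constrain reads, so INSERTs still need the other half - a TenantID default bound to CONTEXT_INFO, per the linked implementation - to stamp new rows.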

    Read the article

  • Special technology needed for browser based chat?

    - by orokusaki
    In this post, I read about the use of XMPP. Is that sort of thing necessary? More importantly, here is my main question expanded: can a chat server and client be built efficiently using only standard HTTP and browser technologies (such as PHP and JS, or RoR and JS, etc.)? Or is it best to stick with established protocols like XMPP and find a way to integrate them with my application? I looked into Campfire via LiveHTTPHeaders and Firebug for about 5 minutes, and it appears to use Ajax to send a request which is never answered until another chat message happens. Is this just Campfire opening a new thread on the server to listen for an update and then returning a response to the request when the thread hears one? I noticed that they're requesting on a specific port (8043, if memory serves), which makes me think they're doing something more complex than what I just described. Also, the URL requested started with /tcp/, which I found interesting. Note: I don't expect to ever have more than 150 users live-chatting across all the rooms combined at the same time. I understand that if I were building a hosted pay-for chat service like Campfire with thousands of concurrent users, it would behoove me to invest time in researching special technologies vs trying to reinvent the wheel in a simple way in my app. Also, if you're going to do it with server polling, how often would you personally poll to maximize responsiveness without slamming the server?
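    What that trace describes is plain long polling, which needs nothing beyond standard HTTP: the client parks a request, the server answers only when a message arrives, and the client immediately parks the next one. A browser-side sketch (the URL and handlers are hypothetical):

        var lastId = 0;
        function poll() {
            $.ajax({
                url: "/messages?since=" + lastId,
                dataType: "json",
                timeout: 30000,              // recycle the request so proxies don't kill it
                success: function(msgs) {
                    render(msgs);            // append to the chat window, advance lastId
                },
                complete: function() {
                    poll();                  // park the next request immediately
                }
            });
        }
        poll();

    The catch is server-side: each waiting client holds a request open, and PHP under Apache ties up a whole process per held request - which is plausibly why Campfire hands the held connections to a dedicated lightweight server on its own port (the 8043 observation). At ~150 users, either a comet-friendly backend or ordinary 2-5 second short polling is workable; with short polling, the interval is exactly that trade-off between latency and requests per second.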

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, a merging interface for each object, etc. My question is what source control paradigm(s) and specific solution(s) to support and why. The way I (perhaps naively) see the source control world is three general paradigms: Single-repository, locked access (MS SourceSafe) Single-repository, concurrent access (CVS/SVN) Distributed (Mercurial, Git) I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the use paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.

    Read the article

  • Using glDrawElements does not draw my .obj file

    - by Hallik
    I am trying to correctly import an .OBJ file from 3ds Max. I got this working using glBegin() & glEnd() from a previous question on here, but had really poor performance obviously, so I am trying to use glDrawElements now. I am importing a chessboard, its game pieces, etc. The board, each game piece, and each square on the board is stored in a struct GroupObject. The way I store the data is like this: struct Vertex { float position[3]; float texCoord[2]; float normal[3]; float tangent[4]; float bitangent[3]; }; struct Material { float ambient[4]; float diffuse[4]; float specular[4]; float shininess; // [0 = min shininess, 1 = max shininess] float alpha; // [0 = fully transparent, 1 = fully opaque] std::string name; std::string colorMapFilename; std::string bumpMapFilename; std::vector<int> indices; int id; }; //A chess piece or square struct GroupObject { std::vector<Material *> materials; std::string objectName; std::string groupName; int index; }; All vertices are triangles, so there are always 3 points. When I am looping through the faces f section in the obj file, I store the v0, v1, & v2 in the Material-indices. (I am doing v[0-2] - 1 to account for obj files being 1-based and my vectors being 0-based. So when I get to the render method, I am trying to loop through every object, which loops through every material attached to that object. I set the material information and try and use glDrawElements. However, the screen is black. I was able to draw the model just fine when I looped through each distinct material with all the indices associated with that material, and it drew the model fine. This time around, so I can use the stencil buffer for selecting GroupObjects, I changed up the loop, but the screen is black. Here is my render loop. The only thing I changed was the for loop(s) so they go through each object, and each material in the object in turn. void GLEngine::drawModel() { glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Vertex arrays setup glEnableClientState( GL_VERTEX_ARRAY ); glVertexPointer(3, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->position); glEnableClientState( GL_NORMAL_ARRAY ); glNormalPointer(GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->normal); glClientActiveTexture( GL_TEXTURE0 ); glEnableClientState( GL_TEXTURE_COORD_ARRAY ); glTexCoordPointer(2, GL_FLOAT, model.getVertexSize(), model.getVertexBuffer()->texCoord); glUseProgram(blinnPhongShader); objects = model.getObjects(); // Loop through objects... for( int i=0 ; i < objects.size(); i++ ) { ModelOBJ::GroupObject *object = objects[i]; // Loop through materials used by object... 
for( int j=0 ; j<object->materials.size() ; j++ ) { ModelOBJ::Material *pMaterial = object->materials[j]; glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, pMaterial->ambient); glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, pMaterial->diffuse); glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, pMaterial->specular); glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, pMaterial->shininess * 128.0f); // Draw faces, letting OpenGL loop through them glDrawElements( GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices ); } } if (model.hasNormals()) glDisableClientState(GL_NORMAL_ARRAY); if (model.hasTextureCoords()) { glClientActiveTexture(GL_TEXTURE0); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } if (model.hasPositions()) glDisableClientState(GL_VERTEX_ARRAY); glBindTexture(GL_TEXTURE_2D, 0); glUseProgram(0); glDisable(GL_BLEND); } I don't know what I am missing that's important. If it's also helpful, here is where I read a 'f' face line and store the info in the obj importer in the pMaterial-indices. else if (sscanf(buffer, "%d/%d/%d", &v[0], &vt[0], &vn[0]) == 3) // v/vt/vn { fscanf(pFile, "%d/%d/%d", &v[1], &vt[1], &vn[1]); fscanf(pFile, "%d/%d/%d", &v[2], &vt[2], &vn[2]); v[0] = (v[0] < 0) ? v[0] + numVertices - 1 : v[0] - 1; v[1] = (v[1] < 0) ? v[1] + numVertices - 1 : v[1] - 1; v[2] = (v[2] < 0) ? v[2] + numVertices - 1 : v[2] - 1; currentMaterial->indices.push_back(v[0]); currentMaterial->indices.push_back(v[1]); currentMaterial->indices.push_back(v[2]); Again, this worked drawing it all together only separated by materials, so I haven't changed code anywhere else except added the indices to the materials within objects, and the loop in the draw method. Before everything was showing up black, now with the setup as above, I am getting an unhandled exception write violation on the glDrawElements line. I did a breakpoint there, and there are over 600 elements in the pMaterial-indices array, so it's not empty, it has indices to use. When I set the glDrawElements like this, it gives me the black screen but no errors glDrawElements( GL_TRIANGLES, pMaterial->indices.size(), GL_UNSIGNED_INT, &pMaterial->indices[0] ); I have also tried adding this when I loop through the faces on import if ( currentMaterial->startIndex == -1 ) currentMaterial->startIndex = v[0]; currentMaterial->triangleCount++; And when drawing... //in draw method glDrawElements( GL_TRIANGLES, pMaterial->triangleCount * 3, GL_UNSIGNED_INT, model.getIndexBuffer() + pMaterial->startIndex );

    Read the article

  • Learning Haskell maps, folds, loops and recursion

    - by Darknight
    I've only just dipped my toe into the world of Haskell as part of my journey of programming enlightenment (moving on from procedural to OOP to concurrent to now functional). I've been trying an online Haskell evaluator. However I'm now stuck on a problem: create a simple function that gives the total sum of an array of numbers. In a procedural language this is easy enough for me (using recursion) (C#): private int sum(ArrayList x, int i) { if (x.Count < i + 1) { return 0; } int t = (int)x[i]; return t + sum(x, i + 1); } All very fine; however my failed attempt at Haskell was thus: let sum x = x+sum in map sum [1..10] This resulted in the following error (from the above-mentioned website): Occurs check: cannot construct the infinite type: a = a -> t Please bear in mind I've only used Haskell for the last 30 minutes! I'm not looking simply for an answer but for more of an explanation of it. Thanks in advance.
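    By way of explanation: the error comes from sum being used both as a number (in x+sum) and as a function, which forces the impossible type a = a -> t; and map applies a function to each element separately, so it can never combine them into one total. The recursive shape of the C# version, plus the idiomatic fold, as a sketch:

        -- explicit recursion, mirroring the C# version
        total :: [Int] -> Int
        total []     = 0
        total (x:xs) = x + total xs

        -- the same thing with the recursion pattern factored out
        total' :: [Int] -> Int
        total' = foldr (+) 0

        main :: IO ()
        main = print (total [1..10])   -- 55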

    Read the article
