Search Results

Search found 8923 results on 357 pages for 'core dumped'.


  • How does the socket API accept() function work?

    - by Eli Bendersky
    The socket API is the de facto standard for TCP/IP and UDP/IP communications (that is, networking code as we know it). However, one of its core functions, accept(), is a bit magical. To borrow a semi-formal definition: accept() is used on the server side. It accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection. In other words, accept() returns a new socket through which the server can communicate with the newly connected client. The old socket (on which accept() was called) stays open, on the same port, listening for new connections.

    How does accept() work? How is it implemented? There's a lot of confusion on this topic. Many people claim that accept() opens a new port and you communicate with the client through it. But this obviously isn't true, as no new port is opened. You actually can communicate through the same port with different clients, but how? When several threads call recv() on the same port, how does the data know where to go? I guess it's something along the lines of the client's address being associated with a socket descriptor, and whenever data comes in through recv() it's routed to the correct socket, but I'm not sure. It would be great to get a thorough explanation of the inner workings of this mechanism.
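
    For concreteness, here is a minimal sketch in Python, whose socket module is a thin wrapper over the same BSD socket API (the host, port, and buffer size are arbitrary choices for illustration). It shows the behavior the definition above describes: the listening socket never changes, each accept() call hands back a fresh socket for one client, and the kernel routes incoming segments to the right socket by the full connection tuple, which is how one local port can serve many clients at once:

        import socket

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 8888))   # arbitrary example port
        listener.listen(5)

        while True:
            # accept() blocks, then returns a NEW socket for one client;
            # 'listener' itself stays open on port 8888 for more clients
            conn, addr = listener.accept()
            data = conn.recv(4096)   # only bytes sent by this client:
                                     # the kernel demultiplexes by the
                                     # (src ip, src port, dst ip, dst port) tuple
            conn.sendall(data)
            conn.close()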

    Read the article

  • Hadoop reduce task hangs

    - by user806098
    I set up a Hadoop cluster with 4 nodes. When running a map-reduce task, the map tasks finish quickly, while the reduce task hangs at 27%. I checked the log: the reduce task fails to fetch map output from the map nodes.

    The job tracker log on the master shows messages like this:

        2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476'

    And the name node log on the master shows messages like this:

        2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.39:50010

    However, neither "web30.bbn.com.cn" nor 202.106.199.39 (nor 202.106.199.3) is a slave node. I think such IPs/domains appear because Hadoop fails to resolve a node (first against the intranet DNS server), then goes to a higher-level DNS server, and on up to the top; when that still fails, the "junk" IPs/domains are returned. But I checked my config, and it goes like this:

        /etc/hosts:
            127.0.0.1      localhost.localdomain localhost
            ::1            localhost6.localdomain6 localhost6
            192.168.225.16 master
            192.168.225.66 slave1
            192.168.225.20 slave5
            192.168.225.17 slave17

        conf/core-site.xml:
            hadoop.tmp.dir  = /root/hadoop_tmp/hadoop_${user.name}
            fs.default.name = hdfs://master:54310
            io.sort.mb      = 1024

        hdfs-site.xml:
            dfs.replication = 3

        masters: master
        slaves:  master slave1 slave5 slave17

    Also, all firewalls (iptables) are turned off, and ssh between each pair of nodes is OK. So I don't know where exactly the error comes from. Please help. Thanks a lot.
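
    One quick way to test the resolver theory above is to run a reverse lookup for the suspect addresses from the master and from each slave and see what the OS resolver actually returns; a small sketch in Python (the two addresses are taken from the logs above):

        import socket

        # the two puzzling addresses from the logs
        for ip in ("202.106.199.39", "192.168.225.19"):
            try:
                print(ip, "->", socket.gethostbyaddr(ip))
            except OSError as e:   # socket.herror is a subclass of OSError
                print(ip, "->", e)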

    Read the article

  • Getting segmentation fault after destructor

    - by therealsquiggy
    I'm making a small file-reading and data-validation program as part of my TAFE (a tertiary college) course. This includes checking and validating dates. I decided that this would be best done with a separate class, rather than integrating it into my main driver class. The problem is that I'm getting a segmentation fault (core dumped) after my test program runs. As near as I can tell, the error occurs when the program terminates, popping up after the destructor is called. So far I have had no luck finding the cause of this fault, and was hoping that some enlightened soul might show me the error of my ways.

    date.h:

        #ifndef DATE_H
        #define DATE_H
        #include <string>
        using std::string;
        #include <sstream>
        using std::stringstream;
        #include <cstdlib>
        using std::exit;
        #include <iostream>
        using std::cout;
        using std::endl;

        class date {
        public:
            explicit date();
            ~date();
            bool before(string dateIn1, string dateIn2);
            int yearsBetween(string dateIn1, string dateIn2);
            bool isValid(string dateIn);
            bool getDate(int date[], string dateIn);
            bool isLeapYear(int year);
        private:
            int days[];
        };
        #endif

    date.cpp:

        #include "date.h"

        date::date() {
            days[0] = 31; days[1] = 28; days[2] = 31; days[3] = 30;
            days[4] = 31; days[5] = 30; days[6] = 31; days[7] = 31;
            days[8] = 30; days[9] = 31; days[10] = 30; days[11] = 31;
        }

        bool date::before(string dateIn1, string dateIn2) {
            int date1[3];
            int date2[3];
            getDate(date1, dateIn1);
            getDate(date2, dateIn2);
            if (date1[2] < date2[2]) {
                return true;
            } else if (date1[1] < date2[1]) {
                return true;
            } else if (date1[0] < date2[0]) {
                return true;
            }
            return false;
        }

        date::~date() {
            cout << "this is for testing only, plox delete\n";
        }

        int date::yearsBetween(string dateIn1, string dateIn2) {
            int date1[3];
            int date2[3];
            getDate(date1, dateIn1);
            getDate(date2, dateIn2);
            int years = date2[2] - date1[2];
            if (date1[1] > date2[1]) {
                years--;
            }
            if ((date1[1] == date2[1]) && (date1[0] > date2[1])) {
                years--;
            }
            return years;
        }

        bool date::isValid(string dateIn) {
            int date[3];
            if (getDate(date, dateIn)) {
                if (date[1] <= 12) {
                    int extraDay = 0;
                    if (isLeapYear(date[2])) {
                        extraDay++;
                    }
                    if ((date[0] + extraDay) <= days[date[1] - 1]) {
                        return true;
                    }
                }
            } else {
                return false;
            }
        }

        bool date::getDate(int date[], string dateIn) {
            string part1, part2, part3;
            size_t whereIs, lastFound;
            whereIs = dateIn.find("/");
            part1 = dateIn.substr(0, whereIs);
            lastFound = whereIs + 1;
            whereIs = dateIn.find("/", lastFound);
            part2 = dateIn.substr(lastFound, whereIs - lastFound);
            lastFound = whereIs + 1;
            part3 = dateIn.substr(lastFound, 4);
            stringstream p1(part1);
            stringstream p2(part2);
            stringstream p3(part3);
            if (p1 >> date[0]) {
                if (p2 >> date[1]) {
                    return (p3 >> date[2]);
                } else {
                    return false;
                }
                return false;
            }
        }

        bool date::isLeapYear(int year) {
            return ((year % 4) == 0);
        }

    And finally, the test program:

        #include <iostream>
        using std::cout;
        using std::endl;
        #include "date.h"

        int main() {
            date d;
            cout << "1/1/1988 before 3/5/1990 [" << d.before("1/1/1988", "3/5/1990")
                 << "]\n1/1/1988 before 1/1/1970 [" << d.before("a/a/1988", "1/1/1970") << "]\n";
            cout << "years between 1/1/1988 and 1/1/1998 [" << d.yearsBetween("1/1/1988", "1/1/1998") << "]\n";
            cout << "is 1/1/1988 valid [" << d.isValid("1/1/1988") << "]\n"
                 << "is 2/13/1988 valid [" << d.isValid("2/13/1988") << "]\n"
                 << "is 32/12/1988 valid [" << d.isValid("32/12/1988") << "]\n";
            cout << "blerg\n";
        }

    I've left in some extraneous cout statements, which I've been using to try to locate the error. I thank you in advance.

    Read the article

  • Replace .sln with MSBuild and wrap contained projects into targets

    - by Filburt
    I'd like to create an MSBuild project that reflects the project dependencies in a solution and wraps the VS projects inside reusable targets. The problem I'd like to solve this way is to svn-export, build, and deploy a specific assembly (and its dependencies) in a BizTalk application.

    My question is: how can I make the targets for svn-exporting, building, and deploying reusable, and also reuse the wrapped projects when they are built for different dependencies? I know it would be simpler to just build the solution and deploy only the assemblies needed, but I'd like to reuse the targets as much as possible.

    The parts. The project I'd like to deploy:

        <Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>
            <ExportRoot Condition="'$(Export)'==''">Export</ExportRoot>
          </PropertyGroup>
          <Target Name="Clean_Export">
            <RemoveDir Directories="$(ExportRoot)\My.Project.Dir" />
          </Target>
          <Target Name="Export_MyProject">
            <Exec Command="svn export svn://xxx/trunk/Biztalk2009/MyProject.btproj --force" WorkingDirectory="$(ExportRoot)" />
          </Target>
          <Target Name="Build_MyProject" DependsOnTargets="Export_MyProject">
            <MSBuild Projects="$(ExportRoot)\My.Project.Dir\MyProject.btproj" Targets="Build" Properties="Configuration=Release"></MSBuild>
          </Target>
          <Target Name="Deploy_MyProject" DependsOnTargets="Build_MyProject">
            <Exec Command="BTSTask AddResource -ApplicationName:CORE -Source:MyProject.dll" />
          </Target>
        </Project>

    The projects it depends upon look almost exactly like this (other .btproj and .csproj files).

    Read the article

  • Designing a general database interface in PHP

    - by lamas
    I'm creating a small framework for my web projects in PHP so I don't have to do the basic work over and over again for every new website. It is not my goal to create a second CakePHP or CodeIgniter, and I'm also not planning to build my websites with any of the available frameworks, as I generally prefer to use things I've created myself.

    I have no problems designing the framework when it comes to parts like the core structure, request handling, and so on, but I'm getting stuck designing the database interface for my modules. I've already thought about using the MVC pattern but decided it would be a bit of overkill. So the exact problem I'm facing is how my framework's modules (viewCustomers could be a module, for example) should interact with the database.

    Is it a good idea to write SQL directly in PHP (mysql_query('SELECT firstname, lastname (.....))? How could I abstract a query like

        SELECT firstname, lastname FROM customers WHERE id=X

    Would MySQL helper functions like

        $this->db->get( array('firstname', 'lastname'), array('id'=>X) )

    be a good idea? I suppose not, because they actually make everything more complicated by requiring arrays to be created and passed around. Is the Model pattern from MVC my only real option?

    Read the article

  • Problem with my jQuery loading in my CodeIgniter views

    - by sico87
    Hello, I am working on a one-page website that allows users to add and remove sections from their navigation as and when they would like to. The way it works is that if they click 'Blog' on the main nav, a 'Blog' section should appear on the page; if they then click 'News', the 'News' section should also be visible. However, the way I have started to implement this, it seems I can only have one section at a time. Can my code be adapted to allow multiple sections to be shown on the main page? Here is my code for the page that has the main menu and the user's selections on it:

        <!DOCTYPE html>
        <head>
            <title>Development Site</title>
            <link rel="stylesheet" href="/media/css/reset.css" media="screen"/>
            <link rel="stylesheet" href="/media/css/generic.css" media="screen"/>
            <script type="text/javascript" src="/media/javascript/jquery-ui/js/jquery.js"></script>
            <script type="text/javascript" src="/media/javascript/jquery-ui/development-bundle/ui/ui.core.js"></script>
            <script type="text/javascript" src="/media/javascript/jquery-ui/development-bundle/ui/ui.accordion.js"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    $('a.menuitem').click(function() {
                        var link = $(this),
                            url = link.attr("href");
                        $("#content_pane").load(url);
                        return false; // prevent default link behavior
                    });
                });
            </script>
        </head>
        <body>
            <li><a class="menuitem" href="inspiration">Inspiration</a></li>
            <li><a class="menuitem" href="blog">Blog</a></li>
            <div id="content_pane">
            </div>
        </body>
        </html>

    Read the article

  • JSF Templating without RAD

    - by Jeffrey Knight
    I have a set of JSPs based on a jtpl template. The template (jtpl file) looks like this:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <%-- tpl:metadata --%>
        <%-- jsf:codeBehind language="java" location="/JavaSource/pagecode/my.java" --%>
        <%-- /jsf:codeBehind --%>
        <%-- /tpl:metadata --%>
        <%@taglib uri="http://www.ibm.com/jsf/html_extended" prefix="hx"%>
        <%@taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
        <HTML>
        <%@taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
        <%@page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
        <f:view>
        <HEAD>
        ...

    Without using RAD, how can I change the jtpl (template) and regenerate the JSPs? I'm looking for a command-line solution. Related question: is the JSP intended to be rendered from the template at design time in the IDE, or at runtime on the server?

    Read the article

  • Case Insensitive Ternary Search Tree

    - by Yan Cheng CHEOK
    I have been using a Ternary Search Tree for a while as the data structure to implement an auto-complete drop-down combo box. This means that when the user types "fo", the drop-down combo box displays

        foo
        food
        football

    The problem is that my current use of the Ternary Search Tree is case-sensitive. My implementation is as follows; it has been used in the real world for over a year, hence I consider it quite reliable: My Ternary Search Tree code

    However, I am looking for a case-insensitive Ternary Search Tree, which means that when I type "fo", the drop-down combo box should show me

        foO
        Food
        fooTBall

    Here are some key interfaces of the TST; I hope the new case-insensitive TST can have a similar interface:

        /**
         * Stores value in the TernarySearchTree. The value may be retrieved using key.
         * @param key A string that indexes the object to be stored.
         * @param value The object to be stored in the tree.
         */
        public void put(String key, E value) {
            getOrCreateNode(key).data = value;
        }

        /**
         * Retrieve the object indexed by key.
         * @param key A String index.
         * @return Object The object retrieved from the TernarySearchTree.
         */
        public E get(String key) {
            TSTNode<E> node = getNode(key);
            if (node == null) return null;
            return node.data;
        }

    An example of usage is as follows; TSTSearchEngine uses TernarySearchTree as its core backbone. Example usage of Ternary Search Tree:

        // There is a stock named "microsoft" and one named "MICROChip" inside the stocks ArrayList.
        TSTSearchEngine<Stock> engine = new TSTSearchEngine<Stock>(stocks);
        // I wish it would return microsoft and MICROChip. Currently, it just returns microsoft.
        List<Stock> results = engine.searchAll("micro");
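
    For what it's worth, one common way to get this behavior without changing the tree itself is to normalize keys on insert and lookup while keeping the originally-cased data as the value. A rough Python sketch of just that normalization idea (the class and method names are invented for illustration, and the dict stands in for the real TST walk):

        class CaseInsensitiveIndex:
            """Illustrative stand-in for a TST: keys are stored
            lower-cased, values keep their original casing."""
            def __init__(self):
                self._entries = {}   # normalized key -> list of values

            def put(self, key, value):
                self._entries.setdefault(key.lower(), []).append(value)

            def search_all(self, prefix):
                p = prefix.lower()
                return [v for k, values in self._entries.items()
                        if k.startswith(p) for v in values]

        idx = CaseInsensitiveIndex()
        for name in ("foO", "Food", "fooTBall", "microsoft", "MICROChip"):
            idx.put(name, name)
        print(idx.search_all("micro"))   # ['microsoft', 'MICROChip']

    In a real TST the same trick amounts to walking the tree with the lower-cased key while the node payload carries the display string.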

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

        Database server - physical box, 12 GB RAM, 2 x quad-core CPUs (2.13 GHz Xeon E5506)
        2 web/app servers - cloud servers, 2 GB RAM, 2 VCPUs
        300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400.

    I started looking at Amazon EC2, specifically this config:

        Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
        2 web/app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost $700. This comes to $1,075/month when amortized. Amazon also includes a firewall for free.

    Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O. Amazon's description of a compute unit is fairly vague; any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current, or even my improved, GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about testing and decided to try writing some tests for an extremely simple Django app, for a start. The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to?

    My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()
                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )
                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """HTTP status code for the detail view"""
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """HTTP status code for the list view"""
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)

    Read the article

  • EOFException in ObjectInputStream only happens with Web Start, not with java(w).exe?!

    - by Houtman
    Hi, is anyone familiar with differences, regarding streams, between starting an app with Web Start (javaws.exe) and starting it with java.exe or javaw.exe?

    This is the exception, which I only get when using Web Start:

        java.io.EOFException
            at java.io.ObjectInputStream$PeekInputStream.readFully(Unknown Source)
            at java.io.ObjectInputStream$BlockDataInputStream.readShort(Unknown Source)
            at java.io.ObjectInputStream.readStreamHeader(Unknown Source)
            at java.io.ObjectInputStream.<init>(Unknown Source)
            at fasttools.jtools.dss.api.core.remoting.thinclient.RemoteSocketChannel.<init>(RemoteSocketChannel.java:77)

    This is how I set up the connections on both sides:

        // == Server side ==
        // Thread {
        Socket mClientSocket = cServSock.accept();
        new DssServant(mClientSocket).start();
        // }

        DssServant(Socket socket) throws DssException {
            try {
                OutputStream mOutputStream = new BufferedOutputStream(socket.getOutputStream());
                cObjectOutputStream = new ObjectOutputStream(mOutputStream);
                cObjectOutputStream.flush(); // publish streamHeader
                InputStream mInputStream = new BufferedInputStream(socket.getInputStream());
                cObjectInputStream = new ObjectInputStream(mInputStream);
                ..
            } catch (IOException e) {
                ..
            }
            ..
        }

        // == Client side ==
        public RemoteSocketChannel(String host, int port, IEventDispatcher eventSubscriptionHandler) throws DssException {
            cHost = host;
            port = (port == 0 ? DssServer.PORT : port);
            try {
                cSocket = new Socket(cHost, port);
                OutputStream mOutputStream = new BufferedOutputStream(cSocket.getOutputStream());
                cObjectOut = new ObjectOutputStream(mOutputStream);
                cObjectOut.flush(); // publish streamHeader
                InputStream mInputStream = new BufferedInputStream(cSocket.getInputStream());
                cObjectIn = new ObjectInputStream(mInputStream);
            } catch (IOException e) {
                ..
            }
            ..
        }

    Thanks.

    [EDIT] The Web Start console says:

        Java Web Start 1.6.0_19
        Using JRE version 1.6.0_19-b04 Java HotSpot(TM) Client VM

    The server is running the same 1.6u19.

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care.

    What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We always use SQL Server as our database backend, and I design and build the databases as well. I have already used L2S successfully a couple of times. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S?

    I was at Code Camp this weekend and, after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here)

    I understand EF is what MS wants us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL Server databases. L2S has great difficulty with this, but when I asked the aforementioned speaker whether EF would help me in this regard, he said "No!"

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following lines in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, although the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files).

    I didn't always have autocrlf set; I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I have tried git checkout -- . about a million times, but with no success.
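
    One possible explanation, offered as a guess: files committed with CRLF endings before autocrlf was switched on will show exactly this "every line changed" diff once they start being normalized on checkin. Rather than recloning, a common recipe is a one-off renormalization commit; a sketch (commit or stash all local work first, since reset --hard discards uncommitted changes):

        # Git 2.16+ can renormalize in one step:
        git add --renormalize .
        git commit -m "Normalize line endings"

        # Older Git: re-add everything through the clean filter by hand.
        git rm --cached -r .     # empty the index, keep working files
        git reset --hard         # restore index and files from HEAD
        git add .                # re-add; CRLF is converted to LF now
        git commit -m "Normalize line endings"

    On newer Git versions, a "* text=auto" line in .gitattributes also makes the policy travel with the repository instead of depending on each clone's autocrlf setting.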

    Read the article

  • Is there a way to toggle bluetooth and/or wifi on and off programmatically in iOS?

    - by Andy W
    I am looking for an easy way to toggle both Bluetooth and Wi-Fi between on and off states on iOS 4.x devices (iPhone and iPad). I am constantly toggling these functions as I move between different locations and usage scenarios, and right now it takes multiple taps and visits to the Settings app.

    I am looking to create a simple app, living on Springboard, that I can just tap; it will turn off the Wi-Fi if it's on, and vice versa, then immediately quit. Similarly, an app for toggling Bluetooth's state. I have the developer SDK, and am comfortable in Xcode and with iOS development, so I am happy to write the required code to create the app. I am just at a loss as to which API, private or not, has the required functionality to simply toggle the state of these facilities.

    Because this is scratching a very personal itch, I have no intention of trying to sell the app or get it up on the App Store, so conforming with App Store guidelines on API usage is a non-issue. What I don't want to do is jailbreak the devices, as I want to keep the core software as shipped.

    Can anyone point me at some sample code or more info on achieving this goal? My Google-fu is letting me down, and if the information is out there for 4.x devices I just can't find it.

    Read the article

  • How to implement a really efficient bitvector sort in Python

    - by xiao
    Hello guys! This is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here.

    What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

        from BitVector import BitVector
        import timeit
        import random
        import time
        import sys

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector(size=len(input_li))
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = (time.time() - start)
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348

        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841

        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392

        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878

        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
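
    One hedged guess at the bottleneck: each bv[i] access is a Python-level method call into the BitVector class, so the constant factor per element is large. A sketch of the same counting idea using a plain bytearray, where indexing is a C-level operation (one byte per flag rather than one bit, trading 8x memory for much cheaper access):

        def byte_sort(input_li):
            # 'seen' uses one byte per value; bytearray indexing avoids
            # BitVector's per-bit __setitem__/__getitem__ overhead
            seen = bytearray(len(input_li))
            for i in input_li:
                seen[i] = 1
            return [i for i, flag in enumerate(seen) if flag]

    To keep the 1-bit-per-number footprint from Programming Pearls, one would pack 8 flags per byte with seen[i >> 3] |= 1 << (i & 7) at the cost of a little more arithmetic per element. Either way, the O(n) scan written in pure Python still competes against sorted()'s highly optimized C implementation, so the built-in sort may well stay ahead regardless.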

    Read the article

  • Django: DatabaseLockError exception with Djapian

    - by jul
    Hi, I get the exception shown below when executing indexer.update(). I have no idea what to do: it used to work, and now the index database seems "locked". Can anybody help? Thanks.

        Environment:
        Request Method: POST
        Request URL: http://piem.org:8000/restaurant/add/
        Django Version: 1.1.1
        Python Version: 2.5.2
        Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.comments', 'django.contrib.sites', 'django.contrib.admin', 'registration', 'djapian', 'resto', 'multilingual']
        Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.middleware.locale.LocaleMiddleware', 'multilingual.middleware.DefaultLanguageMiddleware')

        Traceback:
        File "/var/lib/python-support/python2.5/django/core/handlers/base.py" in get_response
          92. response = callback(request, *callback_args, **callback_kwargs)
        File "/home/jul/atable/../atable/resto/views.py" in addRestaurant
          639. Restaurant.indexer.update()
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/indexer.py" in update
          181. database = self._db.open(write=True)
        File "/home/jul/python-modules/Djapian-2.3.1-py2.5.egg/djapian/database.py" in open
          20. xapian.DB_CREATE_OR_OPEN,
        File "/usr/lib/python2.5/site-packages/xapian.py" in __init__
          2804. _xapian.WritableDatabase_swiginit(self,_xapian.new_WritableDatabase(*args))

        Exception Type: DatabaseLockError at /restaurant/add/
        Exception Value: Unable to acquire database write lock on /home/jul/atable/djapian_spaces/resto/restaurant/resto.index.restaurantindexer: already locked

    Read the article

  • Career Choice in JEE, are EJBs standard?

    - by John Baker
    I have chosen to go the JEE route for a career path, but I have been having a hard time determining which core technologies I need to be most familiar with. I'm at the point where I can write web apps without a problem using JSPs and regular servlets, with JDBC or some basic Hibernate (I know HTML and CSS, and have used MVC extensively on a number of different platforms).

    What I'm trying to find out is whether there is some standard as far as JEE technologies go. When I look at most of the job listings, occasionally someone mentions Struts or Spring, but rarely do I see any mention of EJBs. So my question really is: are EJBs basically required by most JEE employers? Or are most of them working with POJOs? Is it a mix? I have a hard time figuring out whether I should put the majority of my time into Struts, Spring/Hibernate, EJBs, etc.

    And if I do need to master EJBs, which version should I learn: 2.1 or 3.0? 3.0 has some obviously better features, but I figure a lot of companies probably chose to write their apps in 2.1 just because it was the standard of the time, and now migrating would be a big deal. Any advice on this is greatly appreciated.

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However, I am finding that there is no speedup at all.

    I implemented the parallel version of the loop as follows:

        Wake up the two threads (they are blocked on a barrier).
        Each thread then performs the following:
            - Atomically fetch and increment a global counter.
            - Retrieve the particle with that index.
            - Perform the computation on that particle, storing the result in a separate array.
            - Wait on a "job finished" barrier.
        The main thread waits on the "job finished" barrier.

    I used this approach since it should provide good load balancing (each computation may take differing amounts of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have hidden performance costs. If anybody has some ideas of what to look for, or any hints, I would really appreciate it. I've been bashing my head against this for a week, and profiling has not revealed much.

    Read the article

  • jQuery Update not replacing JS files in Drupal 6.16

    - by vr3690
    Hi, I am using jQuery Update in Drupal 6.16 along with a lot of other modules. I am trying to use jQuery UI 1.7.2 to render tabs, but unfortunately they don't work properly, since jQuery Update is not replacing the jQuery file (jQuery 1.3.2). I checked the version using $.fn.jquery (in Firebug) and got 1.2.6 (not 1.3.2 as required) as the result, and as expected the aggregated JS file was using the 1.2.6 version of jQuery (see source).

    Earlier I had just replaced the core script files in /misc with the JS files in the sites/default/modules/jquery_update/replace folder (like you'd do in 5.x) and got the necessary result (I also renamed jquery.min.js to jquery.js). Now that has suddenly stopped working, after I upgraded to 6.x-2.0-alpha1 and also installed the Mollom module. Disabling/uninstalling Mollom or downgrading jQuery Update does not seem to help.

    The problem only occurs on the front page, though; other content pages have jQuery 1.3.2. The problem can be seen here. So, basically, for some reason jQuery Update is not replacing the jQuery files (as it is supposed to) on the front page, and I cannot figure out why that happens. Any ideas?

    Read the article

  • Concept of WNDCLASSEX, good programming habits and WndProc for system classes

    - by luiscubal
    I understand that the Windows API uses "classes", relying on the WNDCLASS/WNDCLASSEX structures. I have successfully gone through Windows API Hello World applications and understand that this class is used by our own windows, but also by Windows core controls, such as "EDIT", "BUTTON", etc. I also understand that it is somehow related to WndProc (it allows me to define a function for it).

    Although I can find documentation about this class, I can't find anything explaining the concept. So far, the only thing I found about it was this: "A Window Class has NOTHING to do with C++ classes." Which really doesn't help (it tells me what it isn't, but doesn't tell me what it is). In fact, this only confuses me more, since I'd be tempted to associate WNDCLASSEX with C++ classes and think that WNDCLASSEX represents a control type. So, my first question is: what is it?

    In second place, I understand that one can define a WndProc in a class. However, a window can also get messages from its child controls (or windows, or whatever they are called in the Windows API). How can this be?

    Finally, when is it good programming practice to define a new class? Per application (for the main frame)? Per frame? One per control I define (if I create my own progress bar class, for example)? I know Java/Swing, C#/Windows.Forms, C/GTK+ and C++/wxWidgets, so I'll probably understand comparisons with these toolkits.

    Read the article

  • WPF drawing performance with large numbers of geometries

    - by MyFaJoArCo
    Hello, I have problems with WPF drawing performance. There are a lot of small EllipseGeometry objects (1024 ellipses, for example), which are added to three separate GeometryGroups with different foreground brushes. Afterwards, I render it all on a simple Image control. Code:

        DrawingGroup tmpDrawing = new DrawingGroup();
        GeometryGroup onGroup = new GeometryGroup();
        GeometryGroup offGroup = new GeometryGroup();
        GeometryGroup disabledGroup = new GeometryGroup();

        for (int x = 0; x < DisplayWidth; ++x)
        {
            for (int y = 0; y < DisplayHeight; ++y)
            {
                if (States[x, y] == true)
                    onGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else if (States[x, y] == false)
                    offGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
                else
                    disabledGroup.Children.Add(new EllipseGeometry(new Rect((double)x * EDGE, (double)y * EDGE, EDGE, EDGE)));
            }
        }

        tmpDrawing.Children.Add(new GeometryDrawing(OnBrush, null, onGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(OffBrush, null, offGroup));
        tmpDrawing.Children.Add(new GeometryDrawing(DisabledBrush, null, disabledGroup));
        DisplayImage.Source = new DrawingImage(tmpDrawing);

    It works fine, but takes too much time: 0.5 s on a Core 2 Quad, 2 s on a Pentium 4. I need <0.1 s everywhere. All ellipses, as you can see, are equal. The background of the control holding my DisplayImage is solid (black, for example), so we can use this fact. I tried using 1024 Ellipse elements instead of an Image with EllipseGeometries, and it worked much faster (~0.5 s), but that is still not enough. How can I speed this up?

    Regards, Oleg Eremeev

    P.S. Sorry for my English.

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a web app which connects to 2 DBs (one core, the other a logging DB). I must now create a Windows service which will use the same business logic / data access DLLs. However, when I try to reference 2 session factories in the service app and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion as to how this can be done?

        public class StaticSessionManager
        {
            public static readonly ISessionFactory SessionFactory;
            public static readonly ISessionFactory LoggingSessionFactory;

            static StaticSessionManager()
            {
                string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"];
                string executingPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase);
                fileName = executingPath + "\\" + fileName;
                SessionFactory = cfg.Configure(fileName).BuildSessionFactory();
                cfg = new Configuration();
                fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"];
                fileName = executingPath + "\\" + fileName;
                LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory();
            }
        }

    The configuration file has the setting:

        <property name="current_session_context_class">call</property>

    The service sets up the factories:

        private ISession _session = null;
        private ISession _loggingSession = null;
        private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory;
        private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        ...
        _sessionFactory = StaticSessionManager.SessionFactory;
        _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        _session = _sessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_session);
        _loggingSession = _loggingSessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_loggingSession);

    So finally, I try to call the correct factory by:

        ISession session = StaticSessionManager.SessionFactory.GetCurrentSession();

    Can anyone suggest a better way to handle this? Thanks in advance! Rob

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) and verifies that both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs).

    When using the "-server" option with Sun JVM 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gets a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass, starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode.

    I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of bit-manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (to feed the stream to the test and exclude disk I/O) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only there it is 15% instead of 12%). Oh, and the tests were all done on the same machine (single core) with no other applications running (Windows XP). While there is some inevitable variation in the measured execution times (using System.nanoTime, by the way), the variation between different test runs with the same settings never exceeded 2%, and was usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse with the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • Minimal "Task Queue" with stock Linux tools to leverage Multicore CPU

    - by Manuel
    What is the best/easiest way to build a minimal task-queue system for Linux using bash and common tools? I have a file with 9,000 lines; each line is a bash command line, and the commands are completely independent:

        command 1 > Logs/1.log
        command 2 > Logs/2.log
        command 3 > Logs/3.log
        ...

    My box has more than one core, and I want to execute X tasks at the same time. I searched the web for a good way to do this; apparently a lot of people have this problem, but nobody has a good solution so far. It would be nice if the solution had the following features:

        - can interpret more than one command per line (e.g. command; command)
        - can interpret stream redirects on the lines (e.g. ls > /tmp/ls.txt)
        - only uses common Linux tools

    Bonus points if it works on other Unix clones without overly exotic requirements.
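
    For what it's worth, GNU xargs can already act as the queue: something like "xargs -P 4 -I CMD sh -c CMD < tasks.txt" runs four lines at a time, and because each line goes through sh -c, both ";" chains and ">" redirects work. If more control is wanted, here is a small sketch in Python (itself a common Linux tool); the file name and worker count are just example arguments:

        import subprocess
        import sys
        from multiprocessing.pool import ThreadPool

        def run(line):
            # shell=True hands the line to /bin/sh, so 'cmd; cmd'
            # and stream redirects behave exactly as in a script
            return subprocess.call(line, shell=True)

        def main(path, workers):
            with open(path) as f:
                jobs = [line.strip() for line in f if line.strip()]
            # a fixed-size pool means at most 'workers' commands run
            # concurrently; each thread just waits on its child process
            ThreadPool(workers).map(run, jobs)

        if __name__ == "__main__":
            main(sys.argv[1], int(sys.argv[2]))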

    Read the article
