Search Results

Search found 498 results on 20 pages for 'quicker'.


  • Building a ctypes-based C library with distutils

    - by Robie Basak
    Following this recommendation, I have written a native C extension library to optimise part of a Python module via ctypes. I chose ctypes over writing a CPython-native library because it was quicker and easier (just a few functions, with all the tight loops inside).

    I've now hit a snag. If I want my work to be easily installable using distutils, via python setup.py install, then distutils needs to be able to build my shared library and install it (presumably into /usr/lib/myproject). However, this is not a Python extension module, and so as far as I can tell, distutils cannot do this. I've found a few references to other people with this problem:

      - Someone on numpy-discussion with a hack back in 2006.
      - Somebody asking on distutils-sig and not getting an answer.
      - Somebody asking on the main python list and being pointed to the innards of an existing project.

    I am aware that I can do something native and not use distutils for the shared library, or indeed use my distribution's packaging system. My concern is that this will limit usability, as not everyone will be able to install it easily.

    So my question is: what is the current best way of distributing a shared library with distutils that will be used by ctypes but otherwise is OS-native and not a Python extension module? Feel free to answer with one of the hacks linked to above if you can expand on it and justify why that is the best way. If there is nothing better, at least all the information will be in one place.
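
    For reference, here is a minimal sketch (not from the original post) of the kind of ctypes wrapper being described: load an OS-native shared library from its install location and declare one function's signature. The library path and the tight_loop function are hypothetical.

        import ctypes

        # Hypothetical install location and library name; the real ones would be
        # whatever setup.py ends up installing.
        _lib = ctypes.CDLL("/usr/lib/myproject/libmyproject.so")

        # Declare the C signature so ctypes marshals arguments correctly.
        _lib.tight_loop.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
        _lib.tight_loop.restype = ctypes.c_double

        def tight_loop(values):
            """Run the optimised C inner loop over a sequence of floats."""
            buf = (ctypes.c_double * len(values))(*values)
            return _lib.tight_loop(buf, len(values))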

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.)

    I've worked both in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) in environments where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do.

    However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least it isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or could be larger developments that have the potential to serve a wider audience, both across the organisation and into external markets.

    The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:

      - 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
      - 'What if one of our systems changes and the connector breaks?'
      - 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'

    With one developer behind these little projects, the answers are invariably:

      - 'Nobody, but...'
      - 'It will break, just like any other application would...'
      - 'I, uh...'

    How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending those 6 months consulting about what tender process we might use?

    Read the article

  • How can I speed up the "finally get it" process?

    - by Earlz
    Hello, I am a hobby programmer and began when I was about 13. I'm currently going to college (freshman) for my computer science degree (which means I'm still in the stuff I already know, such as for loops). I've been programming professionally for a start-up for about 9 months or so now.

    I have a serious problem, though: I think that almost all of the code I write is perfect. I remember reading an article somewhere that said there are three stages of learning programming:

      1. You don't know anything and you know you don't know anything.
      2. You don't know anything but you think you do.
      3. You finally get and accept that you don't know anything.

    (If someone finds that article, tell me and I'll give a link.)

    So right now, I'm at stage 2. How can I get to stage 3 quicker? The more of other people's code I read, the more I think "this is complete rubbish, I would've done it like..." and I really dislike how I think that way (and this fairly recently began happening, like over the past year).

    Read the article

  • Using Interface Builder efficiently

    - by Aaron Wetzler
    I am new to iPhone and Objective-C. I have spent hours and hours and hours reading documents and trying to understand how things work. I have RTFM, or at least am in the process. My main problem is that I want to understand how to specify where an event gets passed to, and the only way I have been able to do it is by specifying delegates, but I am certain there is an easier/quicker way in IB.

    So, an example. Let's say I have 20 different views and view controllers and one MyAppDelegate. I want to be able to build all of these different Xib files in IB, add however many buttons and text fields and whatever, and then specify that they all produce some event in the MyAppDelegate object. To do this I added a MyAppDelegate object in each view controller in IB's list view. Then I created an IBAction method in MyAppDelegate in Xcode, went back to IB and linked all of the events to the MyAppDelegate object in each Xib file. However, when I tried running it, it just crashed with a bad read exception.

    My guess is that each Xib file is putting in a MyAppDelegate object pointer that has nothing to do with the eventual MyAppDelegate address that will actually be created at runtime. So my question is... how can I do this?!!!

    Read the article

  • Stage untracked files for commit without staging tracked file changes

    - by Blair Holloway
    Oftentimes, when using git, I find myself in this situation: I have changes to several files, but I only want to commit parts of them. I have added several untracked files, which I want to track and commit. Solving the first part is easy; I run:

        git add -p

    Then, I choose which hunks to stage, and which hunks remain in my working tree, but unstaged. However, git's patch mode skips over untracked files. What I would like to do is something like:

        git add --untracked

    But no such option appears to exist. If I have, say, six untracked files, I could stage them using add in interactive mode and the add untracked option, like so:

        git add -i
        a<CR>
        1<CR> 2<CR> 3<CR> 4<CR> 5<CR> 6<CR>
        <CR>
        q<CR>

    I feel like there is, or should be, a quicker way of doing this, though. What am I missing?

    Read the article

  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO           RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    And then use a foreach loop with parallel computing, which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it for a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and that there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?

    Read the article

  • Get Active Directory Attributes for Users on Legacy Exchange Servers

    - by Jason Hindson
    I would like to create a CSV file of the users on our Exchange 2003 servers, and include some attributes from their AD account. In particular, I would like to pull certain AD values for the users with RecipientTypeDetails = LegacyMailbox. I have tried a few different methods for targeting and filtering these users (ldapfilter, filter, objectAttribute, etc.), with little success. The Exchange 2003 PowerPack for PowerGUI was helpful, but permissions issues and using the Exchange_Mailbox class are not challenges I want to overcome.

    I was finally able to create a working script, but it is very slow. The script I've created below is currently working, although it is on track to take about 4+ hours to complete. I am looking for suggestions for improving the efficiency of my script or otherwise obtaining this data in a quicker manner. Here is the script:

        $ADproperties = 'City','Company','department','Description','DistinguishedName','DisplayName','FirstName','l','LastName','msExchHomeServerName','NTAccountName','ParentContainer','physicaldeliveryofficename','SamAccountName','useraccountcontrol','UserPrincipalName'

        get-user -ResultSize Unlimited -ignoredefaultscope -RecipientTypeDetails LegacyMailbox |
            foreach {Get-QADUser $_.name -DontUseDefaultIncludedProperties -IncludedProperties $ADproperties} |
            select $ADproperties |
            epcsv C:\UserListBuilder\exchUsers.csv -notype

    Any help you can provide will be greatly appreciated!

    Read the article

  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we all have heard that it's good to bundle your javascript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here.

    Obviously fewer HTTP requests means fewer round trips and hence better. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunking size would arrive much quicker, since modern web browsers download resources like javascripts in parallel.

    Even if chunking is not an issue, it seems like there would be some max recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must execute in order. Obviously there are also matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

    Read the article

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code.

    Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with due to code dating back to or before 1984 being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself.

    Unfortunately the language this is written in doesn't have any ready-made refactoring tools, so it makes it harder to push the 'refactor to increase productivity later' argument, because short-term wins are the only wins paid attention to here...

    Has anyone else experienced this issue, whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but analysts continually insist on doing it just this one time and have management approve it?

    tldr; How can one go about addressing the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add to jump to random sections many lines away) because it 'gets that feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

    Read the article

  • Creating a Linq->HQL provider

    - by Mike Q
    Hi all, I have a client application that connects to a server. The server uses Hibernate for persistence and querying, so it has a set of annotated Hibernate objects for persistence. The client sends HQL queries to the server and gets responses back. The client has an auto-generated set of objects that match the server Hibernate objects for query results and basic persistence.

    I would like to support using Linq to query as well as HQL, as it makes the queries typesafe and quicker to build (no more typos in HQL string queries). I've looked around at the following, but I can't see how to get them to fit with what I have:

      - NHibernate's Linq provider - requires using NHibernate ISession and ISessionFactory, which I don't have.
      - LinqExtender - requires a lot of annotations on the objects and extending a base type; too invasive.

    What I really want is something that will give me a nice, easy-to-process structure to build the HQL queries from. I've read most of a 15-page article written by one of the C# developers on how to create custom providers, and it's pretty fraught, mainly because of the complexity of the expression tree.

    Can anyone suggest an approach for implementing Linq to HQL translation? Perhaps a library that will handle the cleanup of the expression tree into something more SQL/HQL-ish. I would like to support select/from/where/group by/order by/joins. Not too worried about subqueries.

    Read the article

  • What is the Pythonic way to implement a simple FSM?

    - by Vicky
    Yesterday I had to parse a very simple binary data file - the rule is, look for two bytes in a row that are both 0xAA, then the next byte will be a length byte, then skip 9 bytes and output the given amount of data from there. Repeat to the end of the file.

    My solution did work, and was very quick to put together (even though I am a C programmer at heart, I still think it was quicker for me to write this in Python than it would have been in C) - BUT, it is clearly not at all Pythonic and it reads like a C program (and not a very good one at that!)

    What would be a better / more Pythonic approach to this? Is a simple FSM like this even still the right choice in Python? My solution:

        #! /usr/bin/python

        import sys

        f = open(sys.argv[1], "rb")

        state = 0

        if f:
            for byte in f.read():
                a = ord(byte)
                if state == 0:
                    if a == 0xAA:
                        state = 1
                elif state == 1:
                    if a == 0xAA:
                        state = 2
                    else:
                        state = 0
                elif state == 2:
                    count = a;
                    skip = 9
                    state = 3
                elif state == 3:
                    skip = skip - 1
                    if skip == 0:
                        state = 4
                elif state == 4:
                    print "%02x" % a
                    count = count - 1
                    if count == 0:
                        state = 0
                        print "\r\n"
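
    For comparison, here is a minimal sketch (not the accepted approach, and written for Python 3, where indexing a bytes object yields ints) of one way to do the same parse without an explicit state machine: read the whole file and scan for the 0xAA 0xAA marker with bytes.find(), then slice out the payload using the offsets described above.

        import sys

        def extract_records(data):
            """Yield each payload found in the binary blob described above."""
            pos = 0
            while True:
                pos = data.find(b'\xaa\xaa', pos)
                if pos == -1:
                    return
                length = data[pos + 2]       # length byte follows the two-byte marker
                start = pos + 3 + 9          # skip 9 bytes after the length byte
                yield data[start:start + length]
                pos = start + length

        if __name__ == '__main__':
            with open(sys.argv[1], 'rb') as f:
                blob = f.read()
            for record in extract_records(blob):
                print(' '.join('%02x' % b for b in record))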

    Read the article

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I'm finding myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel making one call to the database from PHP is better than making several and correlating the data.

    However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered using subqueries, because it felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may contain one or more subqueries from related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries after the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name FROM organizations WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I decided to rewrite the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?
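
    As a concrete illustration (not from the original post), here is a small, runnable sqlite3 sketch in Python of what the first query above looks like rewritten with a JOIN; the table contents are made up, and a LEFT JOIN is used because it keeps records with no matching user as NULLs, which matches the scalar-subquery behaviour.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users   (user_id INTEGER, username TEXT, first_name TEXT, last_name TEXT);
            CREATE TABLE records (user_id INTEGER, in_timestamp TEXT, out_timestamp TEXT);
            INSERT INTO users   VALUES (1, 'jdoe', 'John', 'Doe');
            INSERT INTO records VALUES (1, '2010-05-01 09:00', '2010-05-01 17:00');
        """)

        join_version = """
            SELECT u.username,
                   u.last_name || ', ' || u.first_name AS name,
                   r.in_timestamp,
                   r.out_timestamp
            FROM records r
            LEFT JOIN users u ON u.user_id = r.user_id
            ORDER BY r.in_timestamp
        """

        for row in conn.execute(join_version):
            print(row)   # ('jdoe', 'Doe, John', '2010-05-01 09:00', '2010-05-01 17:00')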

    Read the article

  • Android - Where to store generated bitmaps?

    - by Josh
    I've got an app which dynamically generates anywhere from 6 to 100 small bitmaps for the user to move around the screen in a given session. I currently generate them in onCreate and store them to the SD card, so that after an orientation change I can grab them out of external storage and display them again. However, this takes time (the loading) and I'd like to keep the bitmap references around between lifecycle changes for quicker access.

    My question is: is there a better place to store my generated bitmaps? I was thinking about creating a static storage library in my base activity, something that would only need to be reloaded when the app is completely removed from memory (shutdown, other apps needing resources, 30-minute restart, etc.). Ideally, I'd like the user to be able to back out to the title screen, click a "Resume" button, and in onCreate just have access to those resident bitmap references instead of having to load them from storage again. For this reason I don't think Activity.onRetainNonConfigurationInstance is what I need.

    Alternatively, is there a better way to handle multiple generated bitmaps than what I'm doing or the plan I described?

    Read the article

  • What runs faster? Wordpress or Drupal 6.x?

    - by electblake
    So... I run a pretty large Wordpress blog. Currently it gets around 20k+ pageviews a day, and it's always a struggle to keep the bad boy running quickly - I currently run it on vps.net with CentOS 5.3. I am also a Drupal developer by trade, so I love the CMS framework for its versatility and portability (I can take work from one site and implement it on another with great ease).

    MY QUESTION IS: What is faster, then - Wordpress 3.x or Drupal 6.x? I'd love to migrate my site to Drupal to be able to roll out new features etc. (which I find awkward to do in Wordpress), but I am scared that Drupal may not be able to handle the traffic. Any opinions? I know that some major players use Drupal - as Dries documents well on his blog - but I'm not under any illusions that Drupal can be a real hog. Thanks for any/all help!

    Please try to avoid server optimization talk unless it pertains to Wordpress or Drupal 6.x specifically; I love to learn more about optimizations, but I do want to sort out which platform is quicker :)

    p.s. - I realize the fastest option is to use a lower-level framework (with less overhead) like CakePHP etc., but assume that isn't an option ;)

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the Systems & Admin team and have been given the task of creating a quota management application to try and encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas.

    At the moment I'm using the code below to go through all the files in a user's homespace to retrieve the overall amount of space they are using. From what I've seen elsewhere there's no other way to do this in C#; the issue with it is that there's quite a high overhead while it retrieves the size of each file and then creates a total.

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.

    Read the article

  • Linq. Help me tune this!

    - by dtrick
    I have a Linq query that is causing some timeout issues. Basically, I have a query that is returning the top 100 results from a table that has approximately 500,000 records. Here is the query:

        using (var dc = CreateContext())
        {
            var accounts = string.IsNullOrEmpty(searchText)
                ? dc.Genealogy_Accounts
                     .Where(a => a.Genealogy_AccountClass.Searchable)
                     .OrderByDescending(a => a.ID)
                     .Take(100)
                : dc.Genealogy_Accounts
                     .Where(a => (a.Code.StartsWith(searchText) || a.Name.StartsWith(searchText))
                                 && a.Genealogy_AccountClass.Searchable)
                     .OrderBy(a => a.Code)
                     .Take(100);

            return accounts.Select(a => } }

    Oddly enough, it is the first Linq query (the branch without search text) that is causing the timeout. I thought that by doing a 'Take' we wouldn't need to scan all 500k records. However, that must be what is happening. I'm guessing that the join to find what is 'searchable' is causing the issue. I'm not able to denormalize the tables... so I'm wondering if there is a way to rewrite the Linq query to get it to return quicker... or if I should just write this query as a stored procedure (and if so, what might it look like). Thanks.

    Read the article

  • Hibernate Exception Fixed By Alt+Tab

    - by Lee Theobald
    Hi all, I've got a very curious problem in Hibernate that I would like some opinions on. In my code, if I do the following:

      1. Go to page A
      2. Click a link on page A to be taken to page B
      3. Click on a data item on page B
      4. Exception thrown

    I get an error telling me:

        failed to lazily initialize a collection of role: XYZ, no session or session was closed

    Fair enough. But when I do the same thing and add an Alt+Tab in the middle, everything is fine. E.g.:

      1. Go to page A
      2. Click a link on page A to be taken to page B
      3. Hit Alt+Tab to switch to another application
      4. Hit Alt+Tab to switch back to the web browser
      5. Click on a data item on page B
      6. Everything is fine

    I'm a little confused as to how switching focus away from my application makes it act as I want it to. Does anyone have any light to shine on the subject? I don't think it's a locking issue, as even if I do the second set of steps quicker than the first, there is still no error. It's a Seam application using Hibernate 3.3.2.GA & 3.4.0.GA.

    Read the article

  • Query distinct list of choices for Django form with App Engine Datastore

    - by Brian
    I've been trying to figure this out for hours across a couple of days, and can not get it to work. I've been everywhere. I'll continue trying to figure it out, but was hoping for a quicker solution. I'm using the App Engine datastore + Django.

    Using a query in a view and custom forms, I was able to get a list to the form, but then I was not able to post. I have been trying to figure out how to dynamically add the choices as part of the Django form... I've tried various ways with no success. Help!

    Below are the two models and the form. I'd like to get a distinct list of address_id values to show in the location field of InfoForm. These fields could (and maybe should) be named the same, but I thought it'd be easier if they were named differently.

        class Info(db.Model):
            user = db.UserProperty()
            location = db.StringProperty()
            info = db.StringProperty()
            created = db.DateTimeProperty(auto_now_add=True)
            modified = db.DateTimeProperty(auto_now=True)

        class Locations(db.Model):
            user = db.UserProperty()
            address_id = db.StringProperty()
            address = db.StringProperty()

        class InfoForm(djangoforms.ModelForm):
            info = forms.ChoiceField(choices=INFO_CHOICES)
            location = forms.ChoiceField()

            class Meta:
                model = Info
                exclude = ['user','created','modified']
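
    One common pattern for this kind of problem (a hedged sketch of a variant of the InfoForm above, not the poster's code) is to leave the choices empty at class-definition time and fill them in when the form is instantiated, deduplicating in Python since the datastore of that era had no built-in DISTINCT query:

        class InfoForm(djangoforms.ModelForm):
            info = forms.ChoiceField(choices=INFO_CHOICES)
            location = forms.ChoiceField(choices=[])

            class Meta:
                model = Info
                exclude = ['user', 'created', 'modified']

            def __init__(self, *args, **kwargs):
                super(InfoForm, self).__init__(*args, **kwargs)
                # Build a distinct (address_id, address) list on each instantiation,
                # so newly added Locations show up without redefining the form class.
                seen = set()
                choices = []
                for loc in Locations.all():
                    if loc.address_id not in seen:
                        seen.add(loc.address_id)
                        choices.append((loc.address_id, loc.address))
                self.fields['location'].choices = choices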

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros.

    Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate?

    (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • Indexing/Performance strategies for vast amount of the same value

    - by DrColossos
    Base information: This is in the context of the indexing process of OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with value "W", "R", "N" (VARCHAR(1)). The table has somewhere around ~75M rows; the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question.

    Now the question itself: The indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following:

        [...]
        SELECT * FROM table WHERE the_key = "W";
        [...]

    The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself is running at a speed that is OK; only the SELECTing takes very long. Do I:

      - need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
      - need to tune some of the server parameters (they are already tuned, and the results they deliver seem to be good; if needed, I can post them)?
      - have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?

    Any alternatives to the above points (except rewriting it or not using it)?

    Read the article

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-sensitive query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of:

      1) Pricing
      2) Latency resulting from:
         - slow CPUs, instance creations, JIT compiles, etc.
         - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
         - communication between cloud and end user
      3) Ease of deployment

    A usage scenario I am expecting is:

      - A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
      - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
      - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds.
      - A background save of the response to a DB should occur (not time critical).
      - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article

  • Is this the best way to make an API request using PHP CURL?

    - by Abs
    Hello all, I have a site that has a simple API which can be used via http. I wish to make use of the API and submit data about 1000-1500 times at one time. Here is their API: http://api.jum.name/

    I have constructed the URL to make a submission, but now I am wondering what is the best way to make these 1000-1500 API GET requests? Here is the PHP CURL implementation I was thinking of:

        $add = 'http://www.mysite.com/3rdparty/API/api.php?fn=post&username=test&password=tester&url=http://google.com&category=21&title=story a&content=content text&tags=Season,news';

        curl_setopt ($ch, CURLOPT_URL, "$add");
        curl_setopt ($ch, CURLOPT_POST, 0);
        curl_setopt ($ch, CURLOPT_COOKIEFILE, 'files/cookie.txt');
        curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 0);
        curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $postdata = curl_exec ($ch);

    Shall I close the CURL connection every time I make a submission? Can I re-write the above in a better way that will make these 1000-1500 submissions quicker? Thanks all.

    Read the article

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static.

    Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency?

    In a related question - is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.

    Added: After considering the comment below, the second portion of the question is probably better off on serverfault.
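
    For reference, a minimal sketch (Python 3, with a local UDP echo thread standing in for the remote host) of the round-trip measurement described above - both timestamps are taken on the measuring host, so no cross-host clock synchronisation is needed, and a one-way figure can only be derived by assuming the path is symmetric:

        import socket
        import threading
        import time

        def echo_forever(sock):
            # Stand-in for the remote peer: echo every datagram back to its sender.
            while True:
                data, addr = sock.recvfrom(1024)
                sock.sendto(data, addr)

        server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        server.bind(("127.0.0.1", 0))
        threading.Thread(target=echo_forever, args=(server,), daemon=True).start()

        client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        client.settimeout(2.0)

        samples = []
        for _ in range(10):
            start = time.monotonic()                   # start timestamp, local host
            client.sendto(b"ping", server.getsockname())
            client.recvfrom(1024)                      # wait for the echo
            samples.append(time.monotonic() - start)   # stop timestamp, local host

        rtt = min(samples)
        print("best RTT: %.6f s; one-way estimate (symmetry assumed): %.6f s" % (rtt, rtt / 2))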

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know if the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in 08 and 10, but that is not my question). I am trying to decide whether I should get a higher-clock dual core or a lower-clock quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler).

    If they are running the most important section (background compiler and other IDE tasks) in one core, then that core will hit its limit sooner on a quad core, especially if the background compiler is the heaviest task. I would imagine this would be difficult to separate into more than one process, so even if it uses multiple cores you might still be better off going for a higher-clock CPU if the majority of the processing is still bound to occur in one core (i.e. the most significant part of the VS environment).

    I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas?

    Thanks, erx

    Read the article

  • SELECT SQL Variable - should I avoid using this syntax and always use SET?

    - by Sholom
    Hi All, This may look like a duplicate of here, but it's not. I am trying to get a best practice, not a technical answer (which I already (think I) know). New to SQL Server and trying to form good habits.

    I found a great explanation of the functional differences between SET @var = and SELECT @var = here: http://vyaskn.tripod.com/differences_between_set_and_select.htm

    To summarize what each has that the other hasn't (see the source for examples):

    SET:
      - ANSI and portable, recommended by Microsoft.
      - SET @var = (SELECT column_name FROM table_name) fails when the select returns more than one value, eliminating the possibility of unpredictable results.
      - SET @var = (SELECT column_name FROM table_name) will set @var to NULL if that's what the SELECT returned, thus never leaving @var at its prior value.

    SELECT:
      - Multiple variables can be set in one statement.
      - Can return multiple system variables set by the prior DML statement.
      - SELECT @var = column_name FROM table_name would set @var to (according to my testing) the last value returned by the select. This could be a feature or a bug. The behaviour can be changed with the SELECT @var = (SELECT column_name FROM table_name) syntax.
      - Speed. Setting multiple variables with a single SELECT statement, as opposed to multiple SET/SELECT statements, is much quicker. He has a sample test to prove his point. If you could design a test to prove otherwise, bring it on!

    So, what do I do?

      - (Almost) always use SET @var =, since using SELECT @var = is messy coding and not standard. OR
      - Use SELECT @var = freely; it could accomplish more for me, unless the code is likely to be ported to another environment.

    Thanks

    Read the article
