Search Results

Search found 14958 results on 599 pages for 'people'.


  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out if it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations.

    We are currently on SVN, and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. NOTE: We consider each project a separate logical set of code, or a Visual Studio Solution. It runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial it seems to me that I would have to create 60 separate central repositories on the server, one for each of these projects.

    QUESTION #1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the “C:...\MyRepository.hg\hgrc” file (Windows install). It also seems as if I have to run 60 servers (hg serve), I would assume on different ports.

    QUESTION #2: If the answer to question 1 is yes, there should be a single central repository for each project, then how have people managed many separate repositories?

    Finally, I haven’t looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but would appreciate any comments from someone who has done this (or if it is even possible).
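    One note that may narrow question #1: a single hg serve process can publish many repositories from one configuration file, so 60 repositories does not have to mean 60 servers on 60 ports. A minimal sketch, assuming Mercurial's hgweb config format and a placeholder repository root (the option is called --webdir-conf in Mercurial versions of this era):

      ; hgweb.config : publish every repository found under one root
      [paths]
      / = C:\Repositories\*

      hg serve --webdir-conf hgweb.config --port 8000

    Each repository still keeps its own .hg\hgrc for per-project settings, but the hosting side stays one process and one port.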

    Read the article

  • How would you go about parsing markdown?

    - by John Leidegren
    You can find the syntax here. The thing is, the source that ships with the download is written in Perl, which I have no intention of honoring. It is riddled with regexes and it relies on MD5 hashes to escape certain characters. Something is just wrong about that! I'm about to hand-code a parser for markdown and I'm wondering if someone has some experience with this?

    Edit: If you don't have anything meaningful to say about the actual parsing of markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution, i.e. a third-party library.) To help a bit with the answers: regexes are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar.

    If you think about markdown, it's fundamentally based around the concept of paragraphs. As such, a reasonable approach might be to split the input into paragraphs. There are many kinds of paragraphs, e.g. heading, text, list, blockquote, code. The challenge is thus to identify these paragraphs and in what context they occur.

    I'll be back with a solution, once I find one that's worthy to be shared.
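    To make the paragraph-first idea above concrete, here is a small sketch: split the input on blank lines, then classify each block by its leading characters. It is only an illustration of the approach (the classification rules are simplified and nowhere near Markdown-complete), shown in C#:

      using System;
      using System.Text.RegularExpressions;

      static class MarkdownBlocks
      {
          // Split on one or more blank lines, then classify each block by how it starts.
          public static void Classify(string input)
          {
              string[] blocks = Regex.Split(input.Trim(), @"(?:\r?\n){2,}");
              foreach (string block in blocks)
              {
                  string b = block.Trim();
                  if (b.Length == 0) continue;

                  string kind =
                      b.StartsWith("#")                    ? "heading"    :
                      b.StartsWith(">")                    ? "blockquote" :
                      Regex.IsMatch(b, @"^(\*|-|\d+\.)\s") ? "list"       :
                      block.StartsWith("    ")             ? "code"       :
                                                             "text";

                  Console.WriteLine("{0}: {1}", kind, b.Split('\n')[0]);
              }
          }
      }

    A second pass can then parse each block according to its kind (inline emphasis, links and so on), which keeps the regexes local to one paragraph type instead of trying to cover the whole grammar in one expression.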

    Read the article

  • Best practice PHP Form Action

    - by Rob
    Hi there, I've built a new script (from scratch, not a CMS) and I've done a lot of work on reducing memory usage and the time it takes for the page to be displayed (caching HTML etc.). There's one thing that I'm not sure about though.

    Take a simple example of an article with a comments section. If the comment form posts to another page that then redirects back to the article page, I won't have the problem of people clicking refresh and resending the information. However, if I do it that way I have to load up my script twice, use twice as much memory, and it takes twice as long, whilst I'm still only displaying the page once.

    Here's an example from my load log. The first load of the article is from the cache, the second rebuilds the page after the comment is posted.

    Example 1
      0 queries using 650856 bytes of memory in 0.018667 - domain.com/article/1/my_article.html
      9 queries using 1325723 bytes of memory in 0.075825 - domain.com/article/1/my_article/newcomment.html
      0 queries using 650856 bytes of memory in 0.029449 - domain.com/article/1/my_article.html

    Example 2
      0 queries using 650856 bytes of memory in 0.023526 - domain.com/article/1/my_article.html
      9 queries using 1659096 bytes of memory in 0.060032 - domain.com/article/1/my_article.html

    Obviously the time fluctuates so you can't really compare that. But as you can see, with the first method I use more memory and it takes longer to load. BUT the first method avoids the refresh problem.

    Does anyone have any suggestions for the best approach, or for alternative ways to avoid the extra load (admittedly minimal, but I'd still like to avoid it) whilst also avoiding the refresh problem?
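    The first approach being weighed up here is usually called Post/Redirect/Get. Purely as an illustration of the flow (the names are invented, and in PHP the redirect would just be a header('Location: ...') after processing the POST), a minimal sketch in ASP.NET MVC style:

      using System.Web.Mvc;

      // The POST handler never renders a page itself, so refreshing the article page
      // only re-issues a harmless GET rather than resubmitting the comment.
      public class ArticleController : Controller
      {
          [HttpGet]
          public ActionResult Show(int id)
          {
              return View(LoadArticle(id));                 // cheap, cacheable path
          }

          [HttpPost]
          public ActionResult AddComment(int id, string comment)
          {
              SaveComment(id, comment);                     // the only expensive work
              return RedirectToAction("Show", new { id });  // 302 back to the GET
          }

          private object LoadArticle(int id) { return null; }      // placeholder
          private void SaveComment(int id, string comment) { }     // placeholder
      }

    The redirect does cost an extra request, but the GET half can be served almost entirely from the HTML cache, which is what the cached article hits in the load log above already show.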

    Read the article

  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. This application successfully plays a sound when I run it as my user from the Terminal. I've configured this app to run as a LaunchDaemon, with the following plist:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>
          <string>org.myorg.appidentifier</string>
          <key>ProgramArguments</key>
          <array>
              <string>/Users/varunm/path/to/cli/application</string>
          </array>
          <key>KeepAlive</key>
          <true/>
          <key>RunAtLoad</key>
          <true/>
      </dict>
      </plist>

    When I have this service launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code:

      NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO];
      [soundToPlay setDelegate:stopper];
      [soundToPlay play];
      while (g_keepRunning) {
          [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
      }
      [soundToPlay setCurrentTime:0.0];

    Is there any way to get my LaunchDaemon application to play sound? This machine gets run by different people, and sometimes has no one logged in, which is why I have to configure it as a LaunchDaemon.

    Read the article

  • MySQL INJECTION Solution...

    - by Val
    I have been bothered for so long by MySQL injections and was thinking of a way to eliminate this problem altogether. I have come up with something below and hope that many people will find this useful. The only drawback I can think of is the partial search: Jo returns "John" by using the LIKE %% statement. Here is a PHP solution:

      <?php
      function safeQ(){
          $search= array('delete','select');//and every keyword...
          $replace= array(base64_encode('delete'),base64_encode('select'));
          foreach($_REQUEST as $k=>$v){
              str_replace($search, $replace, $v);
          }
      }
      foo();

      function html($str){
          $search= array(base64_encode('delete'),base64_encode('select'));
          $replace= array('delete','select');//and every keyword...
          str_replace($search, $replace, $str);
      }

      //example 1
      ...
      ...
      $result = mysql_fetch_array($query);
      echo html($result[0]['field_name']);

      //example 2
      $select = 'SELECT * FROM safeQ($_GET['query']) ';

      //example 3
      $insert = 'INSERT INTO .... value(safeQ($_GET['query']))';
      ?>

    I know, I know that you could still inject using 1=1 or any other type of injection... but I think this could solve half of the problem, so that the right MySQL query is executed.

    So my question is: if anyone can find any drawbacks in this, then please feel free to comment here. PLEASE GIVE AN ANSWER only if you think that this is a very useful solution and no major drawbacks are found, OR you think it is a bad idea altogether...
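    For comparison, the standard way to remove the injection risk entirely is to keep user input out of the SQL text altogether with parameterized queries, rather than rewriting keywords. A minimal sketch (shown in C# with a placeholder connection string; the same idea is available to PHP through mysqli or PDO prepared statements):

      using System;
      using System.Data.SqlClient;

      static class ArticleSearch
      {
          // 'userInput' is sent to the server as data, never spliced into the SQL text,
          // so "delete", quotes or 1=1 in the input cannot change the statement.
          public static void Find(string userInput)
          {
              using (var conn = new SqlConnection("Server=...;Database=...;Integrated Security=true"))
              using (var cmd = new SqlCommand(
                  "SELECT id, title FROM articles WHERE title LIKE @q", conn))
              {
                  cmd.Parameters.AddWithValue("@q", "%" + userInput + "%");
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                      while (reader.Read())
                          Console.WriteLine(reader["title"]);
              }
          }
      }

    This also keeps the partial-search case working, since the LIKE wildcards are added around the parameter value instead of inside the query string.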

    Read the article

  • [Qt] How to get rid of OCI.dll dependency when compiling static

    - by STL
    Hi,

    My application accesses an Oracle database through Qt's QSqlDatabase class. I'm compiling Qt as static for the release build, but I can't seem to be able to get rid of the OCI.dll dependency. I'm trying to link against oci.lib (as available in Oracle's Instant Client with SDK). Here's my configure line:

      configure -qt-libjpeg -qt-zlib -qt-libpng -nomake examples -nomake demos -no-exceptions -no-stl -no-rtti -no-qt3support -no-scripttools -no-openssl -no-opengl -no-phonon -no-style-motif -no-style-cde -no-style-cleanlooks -no-style-plastique -static -release -opensource -plugin-sql-oci -plugin-sql-sqlite -platform win32-msvc2005

    I link against oci.h and oci.lib in the SDK's folder by using:

      set INCLUDE=C:\oracle\instantclient\sdk\include;%INCLUDE%
      set LIB=C:\oracle\instantclient\sdk\lib\msvc;%LIB%

    Then, once Qt is compiled, I use the following lines in my *.pro file:

      QT += sql
      CONFIG += static
      LIBS += C:\oracle\instantclient\sdk\lib\msvc\oci.lib
      QTPLUGIN += qsqloci

    Then, in my main.cpp, I add the following commands to statically compile the OCI plugin into the application:

      #include <QtPlugin>
      Q_IMPORT_PLUGIN(qsqloci)

    After compiling the project, I test it on my workstation and it works (as I have Oracle Instant Client installed). When I try it on another workstation, I always get the message:

      This application has failed to start because OCI.dll was not found. Re-installing this application may fix this problem.

    I don't understand why I still need OCI.dll, as my statically linked application is supposed to link against oci.lib instead. Are there any Qt people here that might have a solution for me? Thanks a lot!

    STL

    Read the article

  • Suggestions of the easiest algorithms for some Graph operations

    - by Nazgulled
    Hi,

    The deadline for this project is closing in very quickly and I don't have much time to deal with what is left. So, instead of looking for the best (and probably more complicated/time consuming) algorithms, I'm looking for the easiest algorithms to implement a few operations on a Graph structure. The operations I'll need to do are as follows:

      List all users in the graph network given a distance X
      List all users in the graph network given a distance X and the type of relation
      Calculate the shortest path between 2 users on the graph network given a type of relation
      Calculate the maximum distance between 2 users on the graph network
      Calculate the most distant connected users on the graph network

    A few notes about my Graph implementation:

      The edge node has 2 properties, one of type char and another int. They represent the type of relation and weight, respectively.
      The Graph is implemented with linked lists, for both the vertices and edges. I mean, each vertex points to the next one and each vertex also points to the head of a different linked list, the edges for that specific vertex.

    What I know about what I need to do: I don't know if this is the easiest, as I said above, but for the shortest path between 2 users, I believe the Dijkstra algorithm is what people seem to recommend pretty often, so I think I'm going with that. I've been searching and searching and I'm finding it hard to implement this algorithm. Does anyone know of any tutorial or something easy to understand so I can implement it myself? If possible, with C source code examples, it would help a lot; I see many examples with math notation but that just confuses me even more.

    Do you think it would help if I "converted" the graph to an adjacency matrix to represent the link weights and relation types? Would it be easier to perform the algorithm on that instead of the linked lists? I could easily implement a function to do that conversion when needed. I'm saying this because I got the feeling it would be easier after reading a couple of pages about the subject, but I could be wrong.

    I don't have any ideas about the other 4 operations. Suggestions?
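    Since the question settles on Dijkstra for the shortest-path operation, here is a compact sketch over an adjacency-list representation close to the one described (a char relation type plus an int weight per edge). It is illustrative C# rather than C, and the linear scan for the closest unvisited vertex keeps it simple at the cost of O(V^2):

      using System;
      using System.Collections.Generic;

      class Edge { public int To; public char Relation; public int Weight; }

      static class Graphs
      {
          // graph[v] is the list of edges leaving vertex v; relation == '\0' means "any relation".
          public static int[] Dijkstra(List<Edge>[] graph, int source, char relation)
          {
              int n = graph.Length;
              var dist = new int[n];
              var visited = new bool[n];
              for (int i = 0; i < n; i++) dist[i] = int.MaxValue;
              dist[source] = 0;

              for (int iter = 0; iter < n; iter++)
              {
                  // Pick the closest unvisited, reachable vertex (simple O(V) scan).
                  int u = -1;
                  for (int v = 0; v < n; v++)
                      if (!visited[v] && dist[v] != int.MaxValue && (u == -1 || dist[v] < dist[u]))
                          u = v;
                  if (u == -1) break;              // everything left is unreachable
                  visited[u] = true;

                  foreach (var e in graph[u])
                  {
                      if (relation != '\0' && e.Relation != relation) continue;
                      if (dist[u] + e.Weight < dist[e.To])
                          dist[e.To] = dist[u] + e.Weight;
                  }
              }
              return dist;                         // dist[t] = shortest distance from source to t
          }
      }

    The same distance array also covers the other operations in a slow but simple way: filtering dist[v] <= X gives the users within distance X (with or without a relation type), and running it from every vertex and taking the largest finite distance gives both the maximum distance between two users and the most distant connected pair.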

    Read the article

  • What do these C# auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batched SELECT statements. The auto-generated table adapter command looks like this...

      this._adapter.InsertCommand.CommandText =
          @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
          SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";

    What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that?

    Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how have long since left.
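    On the first question: the batched SELECT re-reads the row that was just inserted so that identity columns, defaults and other server-generated values flow back into the DataTable. Removing it is safe as long as nothing relies on those refreshed values. Rather than editing every designer file by hand, one workaround is to trim the command text at runtime before the adapter runs against SQL CE; a rough sketch (the helper name is invented):

      using System.Data.Common;

      static class CommandHelper
      {
          // Keep only the first statement so SQL CE never sees the batched SELECT.
          // Trade-off: server-generated values (identity etc.) no longer refresh the DataTable.
          public static void StripBatchSelect(DbCommand command)
          {
              if (command == null || string.IsNullOrEmpty(command.CommandText)) return;
              int split = command.CommandText.IndexOf(';');
              if (split >= 0)
                  command.CommandText = command.CommandText.Substring(0, split);
          }
      }

    Calling this on the adapter's InsertCommand and UpdateCommand before dataAdapter.Update would avoid touching the generated code itself.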

    Read the article

  • How to delete sentences starting with a lower case letter?

    - by Ron
    Hello:

    In the example below the following regex (".*?") was used to remove all dialogue first. The next step is to remove all remaining sentences starting with a lower case letter. Only sentences starting with an upper case letter should remain.

    Example:

      exclaimed Wade. Indeed, below them were villages, of crude huts made of timber and stone and mud. Rubble work walls, for they needed little shelter here, and the people were but savages. asked Arcot, his voice a bit unsteady with suppressed excitement. replied Morey without turning from his station at the window. Below them now, less than half a mile down on the patchwork of the Nile valley, men were standing, staring up, collecting in little groups, gesticulating toward the strange thing that had materialized in the air above them.

    In the example above the following should be deleted only:

      exclaimed Wade.
      asked Arcot, his voice a bit unsteady with suppressed excitement.
      replied Morey without turning from his station at the window.

    A useful regex or simple Perl or python code is appreciated. I'm using version 7 of Textpipe. Thanks.
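    The filtering can also be done in two steps rather than one regex: split the text into rough sentences, then keep only those whose first letter is upper case. A sketch of that idea (in C# here, but the two regexes port directly to Perl or Python):

      using System;
      using System.Linq;
      using System.Text.RegularExpressions;

      static class SentenceFilter
      {
          public static string RemoveLowercaseSentences(string text)
          {
              // Rough sentence = a run of text ending in ., ! or ?, plus trailing whitespace.
              var sentences = Regex.Matches(text, @"[^.!?]+[.!?]+\s*")
                                   .Cast<Match>()
                                   .Select(m => m.Value);

              // Keep a sentence only if its first letter is upper case.
              var kept = sentences.Where(s =>
              {
                  char first = s.TrimStart().FirstOrDefault(char.IsLetter);
                  return first == '\0' || char.IsUpper(first);
              });

              return string.Concat(kept);
          }
      }

    On the sample above this drops "exclaimed Wade.", "asked Arcot, ..." and "replied Morey ..." and keeps the sentences beginning with "Indeed", "Rubble" and "Below".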

    Read the article

  • Converted some PHP functions to c# but getting different results

    - by Tom Beech
    With a bit of help from people on here, I've converted the following PHP functions to C#, but I get very different results between the two and can't work out where I've gone wrong.

    PHP:

      function randomKey($amount) {
          $keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
          $randkey = "";
          for ($i=0; $i<$amount; $i++)
              $randkey .= substr($keyset, rand(0, strlen($keyset)-1), 1);
          return $randkey;
      }

      public static function hashPassword($password) {
          $salt = self::randomKey(self::SALTLEN);
          $site = new Sites();
          $s = $site->get();
          return self::hashSHA1($s->siteseed.$password.$salt.$s->siteseed).$salt;
      }

    C#:

      public static string randomKey(int amount)
      {
          string keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
          string randkey = string.Empty;
          Random random = new Random();
          for (int i = 0; i < amount; i++)
          {
              randkey += keyset.Substring(0, random.Next(2, keyset.Length - 2));
          }
          return randkey;
      }

      static string hashPassword(string password)
      {
          string salt = randomKey(4);
          string siteSeed = "6facef08253c4e3a709e17d9ff4ba197";
          return CalculateSHA1(siteSeed + password + salt + siteSeed) + siteSeed;
      }

      static string CalculateSHA1(string ipString)
      {
          SHA1 sha1 = new SHA1CryptoServiceProvider();
          byte[] ipBytes = Encoding.Default.GetBytes(ipString.ToCharArray());
          byte[] opBytes = sha1.ComputeHash(ipBytes);
          StringBuilder stringBuilder = new StringBuilder(40);
          for (int i = 0; i < opBytes.Length; i++)
          {
              stringBuilder.Append(opBytes[i].ToString("x2"));
          }
          return stringBuilder.ToString();
      }

    EDIT: The string 'password' in the PHP function comes out as "d899d91adf31e0b37e7b99c5d2316ed3f6a999443OZl"; in the C# it comes out as "905d25819d950cf73f629fc346c485c819a3094a6facef08253c4e3a709e17d9ff4ba197".
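    Two differences between the versions line up with the sample output above (the C# result ends in the 32-character site seed instead of a 4-character salt, and is far longer): the C# randomKey appends a random-length prefix of the keyset via Substring instead of one random character per iteration, and hashPassword appends siteSeed to the hash where the PHP appends the salt. A corrected sketch of what the PHP appears to intend (not verified against the original application):

      using System.Text;

      static readonly Random random = new Random();

      static string RandomKey(int amount)
      {
          const string keyset = "abcdefghijklmABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
          var sb = new StringBuilder(amount);
          for (int i = 0; i < amount; i++)
              sb.Append(keyset[random.Next(keyset.Length)]);   // exactly one random character
          return sb.ToString();
      }

      static string HashPassword(string password)
      {
          string salt = RandomKey(4);
          string siteSeed = "6facef08253c4e3a709e17d9ff4ba197";
          // The PHP appends the salt, not the seed, so it can be recovered from the stored value.
          return CalculateSHA1(siteSeed + password + salt + siteSeed) + salt;
      }

    Even with those fixes the two implementations will only agree for the same salt, since the salt is random on every call; to compare them directly, feed both sides a fixed salt (and make sure both hash the string with the same encoding, e.g. UTF-8).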

    Read the article

  • ASP.NET MVC unit testing

    - by Simon Lomax
    Hi, I'm getting started with unit testing and trying to do some TDD. I've read a fair bit about the subject and written a few tests. I just want to know if the following is the right approach.

    I want to add the usual "contact us" facility on my web site. You know the thing: the user fills out a form with their email address, enters a brief message and hits a button to post the form back. The model binders do their stuff and my action method accepts the posted data as a model. The action method would then parse the model and use SMTP to send an email to the web site administrator informing him/her that somebody filled out the contact form on their site.

    Now for the question... In order to test this, would I be right in creating an interface IDeliver that has a method Send(emailAddress, message) to accept the email address and message body? Implement the interface in a concrete class and let that class deal with the SMTP stuff and actually send the mail. If I add the interface as a parameter to my controller constructor I can then use DI and IoC to inject the concrete class into the controller. But when unit testing I can create a fake or mock version of my IDeliver and do assertions on that.

    The reason I ask is that I've seen other examples of people generating interfaces for SmtpClient and then mocking that. Is there really any need to go that far, or am I not understanding this stuff?
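    A minimal sketch of the arrangement described (names other than IDeliver are invented, and the fake is hand-rolled rather than produced by a mocking framework):

      using System.Web.Mvc;

      public interface IDeliver
      {
          void Send(string emailAddress, string message);
      }

      public class ContactForm { public string Email { get; set; } public string Message { get; set; } }

      public class ContactController : Controller
      {
          private readonly IDeliver _deliver;
          public ContactController(IDeliver deliver) { _deliver = deliver; }   // injected by the IoC container

          [HttpPost]
          public ActionResult Index(ContactForm form)
          {
              _deliver.Send(form.Email, form.Message);
              return RedirectToAction("Thanks");
          }
      }

      // In a unit test, this fake records the call instead of touching SMTP.
      public class FakeDeliver : IDeliver
      {
          public string LastAddress, LastMessage;
          public void Send(string emailAddress, string message)
          {
              LastAddress = emailAddress;
              LastMessage = message;
          }
      }

    The concrete implementation is then the only class that knows about SmtpClient, so there is no need to wrap or mock SmtpClient itself; the test just asserts against FakeDeliver.LastAddress and LastMessage.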

    Read the article

  • What is the best approach for creating a Common Information Model?

    - by Kaiser Advisor
    Hi, I would like to know the best approach to create a Common Information Model. Just to be clear, I've also heard it referred to as a canonical information model, semantic information model, and master data model - As far as I can tell, they are all referring to the same concept. I've heard in the past that a combined "top-down" and "bottom-up" approach is best. This has the advantage of incorporating "Ivory tower" architects and developers - The work will meet somewhere in the middle and usually be both logical and practical. However, this involves bringing in a lot of people with different skill sets. I've also seen a couple of references to the Distributed Management Task Force, but I can't glean much on best practices in terms of CIM development. This is something I'm quite interested in getting some feedback on since having a strong CIM is a prerequisite to SOA. Thanks for your help! KA Update I've heard another strategy goes along with overall SOA implementation: Get the business involved, and seek executive sponsorship. This would be part of the "Top-down" effort.

    Read the article

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this:

      public interface IDateRecognizer
      {
          DateTime[] Recognize(string s);
      }

    The dates might exist anywhere in the string and might be any format. For now, I could limit to U.S. culture formats. The dates would not be delimited in any way. They might have arbitrary amounts of whitespace between parts of the date. The ideas I have are:

      ANTLR
      Regex
      Hand rolled

    I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it but I'd rather not re-solve a solved problem. Suggestions?

    UPDATE: Here is an example. Given this input:

      This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963). The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov   3,   1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case.

    The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
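    One middle ground between a full ANTLR grammar and a purely hand-rolled scanner is a two-stage approach: a deliberately loose regex finds substrings that look like dates, then DateTime.TryParse with the U.S. culture decides whether each candidate really is one. A sketch, not a complete recognizer (the month-name alternation and whitespace handling are simplified):

      using System;
      using System.Collections.Generic;
      using System.Globalization;
      using System.Text.RegularExpressions;

      public class DateRecognizer : IDateRecognizer
      {
          static readonly Regex Candidate = new Regex(
              @"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4}" +                                  // 11/3/63, 11-2-1963
              @"|(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+\d{1,2},?\s+\d{2,4})\b",
              RegexOptions.IgnoreCase);

          public DateTime[] Recognize(string s)
          {
              var results = new List<DateTime>();
              foreach (Match m in Candidate.Matches(s))
              {
                  // Collapse the arbitrary internal whitespace before parsing.
                  string candidate = Regex.Replace(m.Value, @"\s+", " ");
                  DateTime parsed;
                  if (DateTime.TryParse(candidate, new CultureInfo("en-US"),
                                        DateTimeStyles.None, out parsed))
                      results.Add(parsed);
              }
              return results.ToArray();
          }
      }

    TryParse acts as the filter, so a loose candidate pattern is fine; edge cases like the missing comma in [Nov 3 63] come down to whether the framework parser accepts them.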

    Read the article

  • .Net System.Mail.Message adding multiple "To" addresses

    - by Matt Dawdy
    I just hit something I think is inconsistent, and wanted to see if I'm doing something wrong, if I'm an idiot, or...

      MailMessage msg = new MailMessage();
      msg.To.Add("[email protected]");
      msg.To.Add("[email protected]");
      msg.To.Add("[email protected]");
      msg.To.Add("[email protected]");

    Really only sends this email to 1 person, the last one. To add multiples I have to do this:

      msg.To.Add("[email protected],[email protected],[email protected],[email protected]");

    I don't get it. I thought I was adding multiple people to the "To" address collection, but what I was doing was replacing it. I think I just realized my error -- to add one item to the collection, use

      .To.Add(new MailAddress("[email protected]"))

    If you use just a string, it replaces everything it had in its list. Ugh. I'd consider this a rather large gotcha! Since I answered my own question, but I think this is of value to have in the stackoverflow archive, I'll still ask it. Maybe someone even has an idea of other traps that you can fall into.

    Read the article

  • What's your preferred pointer declaration style, and why?

    - by Owen
    I know this is about as bad as it gets for "religious" issues, as Jeff calls them. But I want to know why the people who disagree with me on this do so, and hear their justification for their horrific style. I googled for a while and couldn't find a style guide talking about this. So here's how I feel pointers (and references) should be declared:

      int* pointer = NULL;
      int& ref = *pointer;
      int*& pointer_ref = pointer;

    The asterisk or ampersand goes with the type, because it modifies the type of the variable being declared.

    EDIT: I hate to keep repeating the word, but when I say it modifies the type I'm speaking semantically. "int* something;" would translate into English as something like "I declare something, which is a pointer to an integer." The "pointer" goes along with the "integer" much more so than it does with the "something." In contrast, the other uses of the ampersand and asterisk, as address-of and dereferencing operators, act on a variable.

    Here are the other two styles (maybe there are more but I really hope not):

      int *ugly_but_common;
      int * uglier_but_fortunately_less_common;

    Why? Really, why? I can never think of a case where the second is appropriate, and the first only suitable perhaps with something like:

      int *hag, *beast;

    But come now... multiple variable declarations on one line is kind of ugly form in itself already.

    Read the article

  • Using DTOs and BOs

    - by ryanzec
    One area of question for me about DTOs/BOs is when to pass/return the DTOs and when to pass/return the BOs.

    My gut reaction tells me to always map NHibernate to the DTOs, not the BOs, and always pass/return the DTOs. Then, whenever I needed to perform business logic, I would convert my DTO into a BO. The way I would do this is that my BO would have a constructor whose only argument is typed as my interface (which defines the required fields/properties) that both my DTO and BO implement. Then I would be able to create my BO by passing it the DTO in the constructor (since both implement the same interface, they both have the same properties) and then be able to perform my business logic with that BO. I would then also have a way to convert a BO back to a DTO.

    However, I have also seen cases where people seem to work only with BOs, and only work with DTOs in the background, so that to the user it looks like there are no DTOs.

    What benefits/downfalls are there with this architecture versus always using BOs? Should I always be passing/returning either DTOs or BOs, or mix and match (seems like mixing and matching could get confusing)?
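    A minimal sketch of the constructor-plus-shared-interface arrangement described above (all names are invented):

      // Shared shape that both the DTO and the BO implement.
      public interface ICustomerData
      {
          string Name { get; set; }
          decimal CreditLimit { get; set; }
      }

      // Dumb carrier: what NHibernate maps and what gets passed around.
      public class CustomerDto : ICustomerData
      {
          public string Name { get; set; }
          public decimal CreditLimit { get; set; }
      }

      // Business object: the same shape plus behaviour.
      public class CustomerBo : ICustomerData
      {
          public string Name { get; set; }
          public decimal CreditLimit { get; set; }

          public CustomerBo(ICustomerData source)       // build a BO from any DTO
          {
              Name = source.Name;
              CreditLimit = source.CreditLimit;
          }

          public bool CanOrder(decimal amount)          // business logic lives here
          {
              return amount <= CreditLimit;
          }

          public CustomerDto ToDto()                    // and convert back again
          {
              return new CustomerDto { Name = Name, CreditLimit = CreditLimit };
          }
      }

    The copy-in/copy-out constructor and ToDto are the pieces that tend to grow tedious as the number of properties rises, which is usually the point at which a mapping tool or the "BOs only at the edges" style mentioned above starts to look attractive.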

    Read the article

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    Have created a very simple Web Service (asmx) in Visual Studio 2010 Professional, and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site:

      The web site could not be configured correctly; getting ASP.NET process information failed. Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd return an error: The remote server returned an error: (500) Internal Server Error.

      http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test

    I have tried:
      1. Running the tests on IIS rather than ASP.NET Development Server
      2. Adding and then removing the XML fragment to my Web Service's .config file
      3. Giving the MACHINE\ASPNET account Full control to the local folder

    My current questions:
      1. Why am I being bothered with this instrumentation / code coverage DLL, when this doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
      2. I'm placing the node under in Web.config - is that the correct node?
      3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advising making the Web Service as light-weight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.

    Best wishes,
    Ben

    Read the article

  • When is someone else's code I use from the internet "mine"?

    - by robault
    I'm building a library from methods that I've found on the internet. Some are free to use or modify with no requirements, others say that if I leave a comment in the code it's okay to use, others say when I use the code I have to attribute the use of someone's code in my application (in the credits for my app I guess). What I've been doing is reorganizing classes, renaming methods, adding descriptions (code comments), renaming the parameters and names inside the methods to something meaningful, optimizing loops if applicable, changing return types, adding try/catch/throw blocks, adding parameter checks and cleaning up resources in the methods. For example; I didn't come up with the algorithm for blurring a Bitmap but I've taken the basic example of iterating through the pixels and turned it into a decent library method (applying the aforementioned modifications). I understand how to go about building it now myself but I didn't actually hit the keystrokes to make it and I couldn't have come up with it before learning from their example. What about code people get in answers on Stackoverflow or examples from Codeproject? At what point can I drop their requirements because at n% their code became mine? FWIW I intend on using the libraries to create products that I will sell.

    Read the article

  • Microsoft Reporting: Setting subreport parameters in code

    - by Svish
    How can I set a parameter of a sub-report? I have successfully hooked myself up to the SubreportProcessing event, I can find the correct sub-report through e.ReportPath, and I can add datasources through e.DataSources.Add. But I find no way of adding report parameters.

    I have found people suggesting to add them to the master report, but I don't really want to do that, since the master report shouldn't have to be connected to the sub-report at all, other than that it is wrapping the sub-report. I am using one report as a master template, printing the name of the report, page numbers etc., and the subreport is going to be the report itself. If I could only find a way to set those report parameters of the sub-report I would be good to go...

    Clarification: Creating/defining the parameters is not the problem. The problem is to set their values. I thought the natural thing to do was to do it in the SubreportProcessing event, and the SubreportProcessingEventArgs does in fact have a Parameters property. But it is read only! So how do you use that? How can I set their values?

    Read the article

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query. It worked until I wanted to add a 2nd row containing "field 2".

      Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
      From site_info.dbo."Site Updates"
      Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
      Union all
      Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
      From site_info.dbo."Site Updates"
      Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
      Order By "Site Location" ASC;

    Basically I want 2 different cells for the locations, for example:

      name - Chicago - denver
      user1 - 100 - 20
      user2 - 34 - 1002

    Right now, for some odd reason, it's combining them like:

      name - chicago
      user1 - 120
      user2 - 1036

    Please note updating to 2010 beta is not a viable option for me at this point. Any and all input that will help me is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php, however it's not gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.

    Read the article

  • What is the easiest way to read wav-files using Python [summary]?

    - by Roman
    I want to use Python to access a wav-file and write its content in a form which allows me to analyze it (let's say arrays). I heard that "audiolab" is a suitable tool for that (it transforms numpy arrays into wav and vice versa).

    I have installed "audiolab" but I had a problem with the version of numpy (I could not "from numpy.testing import Tester"). I had version 1.1.1 of numpy, so I installed a newer version of numpy (1.4.0). But then I got a new set of errors:

      Traceback (most recent call last):
        File "test.py", line 7, in <module>
          import scikits.audiolab
        File "/usr/lib/python2.5/site-packages/scikits/audiolab/__init__.py", line 25, in <module>
          from pysndfile import formatinfo, sndfile
        File "/usr/lib/python2.5/site-packages/scikits/audiolab/pysndfile/__init__.py", line 1, in <module>
          from _sndfile import Sndfile, Format, available_file_formats, available_encodings
        File "numpy.pxd", line 30, in scikits.audiolab.pysndfile._sndfile (scikits/audiolab/pysndfile/_sndfile.c:9632)
      ValueError: numpy.dtype does not appear to be the correct type object

    I gave up on audiolab and thought I could use the "wave" package to read in a wav-file. I asked a question about that but people recommended using scipy instead. OK, I decided to focus on scipy (I have version 0.6.0). But when I tried to do the following:

      from scipy.io import wavfile
      x = wavfile.read('/usr/share/sounds/purple/receive.wav')

    I get the following:

      Traceback (most recent call last):
        File "test3.py", line 4, in <module>
          from scipy.io import wavfile
        File "/usr/lib/python2.5/site-packages/scipy/io/__init__.py", line 23, in <module>
          from numpy.testing import NumpyTest
      ImportError: cannot import name NumpyTest

    So, I gave up on scipy. Can I use just the wave package? I do not need much. I just need to have the content of the wav-file in a human readable format and then I will figure out what to do with that.

    Read the article

  • Is this a good or bad way to use constructor chaining? (... to allow for testing).

    - by panamack
    My motivation for chaining my class constructors here is so that I have a default constructor for mainstream use by my application and a second that allows me to inject a mock and a stub. It just seems a bit ugly 'new'-ing things in the ":this(...)" call, and counter-intuitive to call a parameterized constructor from a default constructor, so I wondered what other people would do here? (FYI - SystemWrapper)

      using SystemWrapper;

      public class MyDirectoryWorker{
          // SystemWrapper interface allows for stub of sealed .Net class.
          private IDirectoryInfoWrap dirInf;
          private FileSystemWatcher watcher;

          public MyDirectoryWorker()
              : this(
                  new DirectoryInfoWrap(new DirectoryInfo(MyDirPath)),
                  new FileSystemWatcher())
          {
          }

          public MyDirectoryWorker(IDirectoryInfoWrap dirInf, FileSystemWatcher watcher)
          {
              this.dirInf = dirInf;
              if(!dirInf.Exists){
                  dirInf.Create();
              }
              this.watcher = watcher;
              watcher.Path = dirInf.FullName;
              watcher.NotifyFilter = NotifyFilters.FileName;
              watcher.Created += new FileSystemEventHandler(watcher_Created);
              watcher.Deleted += new FileSystemEventHandler(watcher_Deleted);
              watcher.Renamed += new RenamedEventHandler(watcher_Renamed);
              watcher.EnableRaisingEvents = true;
          }

          public static string MyDirPath{get{return Settings.Default.MyDefaultDirPath;}}

          // etc...
      }

    Read the article

  • How to contain the Deepwater Horizon oil spill? [closed]

    - by Yarin
    This is obviously not programming, but it's important and we're smart people, so let's give it a shot. (BP has actually begun soliciting suggestions for how to deal with the crisis http://www.deepwaterhorizonresponse.com/go/doc/2931/546759/, confirming that they don't have a clue) I'll start with my own proposal... Anchored Chute: A large-diameter, collapsible, flexible tube/hose with a wide mouth on one end is anchored over the leak. There's no need for a hermetic seal, the opening just needs to be big enough to form a canopy over the leak area. The rest of the tubing can just be dumped on the sea floor. Since oil is denser than water, the oily water that flows into the mouth eventually inflates the tube and raises the opposite end to the surface, where it can be collected (Like those inflatable dancing air socks at car dealerships). Further buoyancy could be added with floats attached to the tube at intervals. I think this method would not be as susceptible to the problems BP had with the containment dome, where a rigid, metal casing froze up with crystallized hydrates, as we would not be trying to contain the full pressure of the well, but would be using the natural buoyancy of the oil to channel its flow, and with a much larger opening.

    Read the article

  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people,

    I have this issue about an orphan IBM JVM process being created in the process tree. For example:

      C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py"

    Hello.py has the simple implementation:

      import time
      i = 0
      while (1):
          i = i + 1
          print "Hello World " + str(i)
          time.sleep(3.0)

    My machine has such JVM information:

      C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version
        -Xmca32K        RAM class segment increment
        -Xmco128K       ROM class segment increment
        -Xmns0K         initial new space size
        -Xmnx0K         maximum new space size
        -Xms4M          initial memory size
        -Xmos4M         initial old space size
        -Xmox1624995K   maximum old space size
        -Xmx1624995K    memory maximum
        -Xmr16K         remembered set size
        -Xlp4K          large page size
                        available large page sizes: 4K 4M
        -Xmso256K       operating system thread stack size
        -Xiss2K         java thread stack initial size
        -Xssi16K        java thread stack increment
        -Xss256K        java thread stack maximum size
      java version "1.6.0"
      Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106))
      IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled)
      J9VM - 20091001_043491
      JIT - r9_20090902_1330ifx1
      GC - 20090817_AA)
      JCL - 20091006_01

    While the program is running, I tried to kill it and subsequently I found an orphan IBM JVM process in the process tree.

    Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is wrongly implemented. Any suggestions?

    Read the article

  • Is there a standard for storing normalized phone numbers in a database?

    - by Eric Z Beard
    What is a good data structure for storing phone numbers in database fields? I'm looking for something that is flexible enough to handle international numbers, and also something that allows the various parts of the number to be queried efficiently.

    [Edit] Just to clarify the use case here: I currently store numbers in a single varchar field, and I leave them just as the customer entered them. Then, when the number is needed by code, I normalize it. The problem is that if I want to query a few million rows to find matching phone numbers, it involves a function, like

      where dbo.f_normalizenum(num1) = dbo.f_normalizenum(num2)

    which is terribly inefficient. Also queries that are looking for things like the area code become extremely tricky when it's just a single varchar field.

    [Edit] People have made lots of good suggestions here, thanks! As an update, here is what I'm doing now: I still store numbers exactly as they were entered, in a varchar field, but instead of normalizing things at query time, I have a trigger that does all that work as records are inserted or updated. So I have ints or bigints for any parts that I need to query, and those fields are indexed to make queries run faster.
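    A sketch of the normalize-on-write idea described in the second edit, done in application code instead of a trigger (NANP-style assumptions; all names are invented):

      using System.Linq;

      static class PhoneNumbers
      {
          // Reduce whatever the customer typed to digits only, dropping a leading US/Canada "1".
          public static string Normalize(string raw)
          {
              string digits = new string(raw.Where(char.IsDigit).ToArray());
              if (digits.Length == 11 && digits.StartsWith("1"))
                  digits = digits.Substring(1);
              return digits;                      // "(800) 555-1212" -> "8005551212"
          }

          // With the normalized digits stored in their own indexed column at insert/update time,
          // an area-code lookup becomes a plain prefix comparison instead of a per-row function call.
          public static string AreaCode(string normalized)
          {
              return normalized.Length >= 3 ? normalized.Substring(0, 3) : normalized;
          }
      }

    Whether this runs in a trigger or in the data-access layer, the point is the same as in the edit above: the expensive normalization happens once per write, and queries hit indexed, already-normalized columns.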

    Read the article
