Search Results

Search found 50994 results on 2040 pages for 'simple solution'.


  • The "correct" way to define an exception in Python without PyLint complaining

    - by Evgeny
    I'm trying to define my own (very simple) exception class in Python 2.6, but no matter how I do it I get some warning. First, the simplest way: class MyException(Exception): pass This works, but prints out a warning at runtime: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 OK, so that's not the way. I then tried: class MyException(Exception): def __init__(self, message): self.message = message This also works, but PyLint reports a warning: W0231: MyException.__init__: __init__ method from base class 'Exception' is not called. So I tried calling it: class MyException(Exception): def __init__(self, message): super(Exception, self).__init__(message) self.message = message This works, too! But now PyLint reports an error: E1003: MyException.__init__: Bad first argument 'Exception' given to super class How the hell do I do such a simple thing without any warnings?
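
    A minimal sketch of the form that usually keeps both the interpreter and PyLint quiet: call the base initializer through the subclass (which addresses E1003) and let Exception store the text instead of assigning to the deprecated .message attribute (which avoids W0231 and the 2.6 DeprecationWarning).

        class MyException(Exception):
            def __init__(self, message):
                # Pass the subclass, not Exception, as super()'s first argument;
                # that is what PyLint's E1003 complains about.
                super(MyException, self).__init__(message)

        # The text is still available without touching the deprecated .message:
        #   str(err) or err.args[0]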

    Read the article

  • Most cost effective way to target multiple mobile platforms

    - by niidto
    Hi, I have been given the task of speccing a mobile application which will need to run on approximately 1000 devices. These devices already exist and consist of iPhones, BlackBerrys, Androids, Windows Mobile devices and netbooks. The application will have simple reporting capability and a collection of forms. The obvious solution would be some browser-based approach, although given the occasionally connected nature of the devices there's a potential for data to get lost or not saved. So instead of creating a complex application for each platform, I was thinking we could build what is effectively a form generator with basic offline storage capability (text files), designed to run on each device: the device would generate a form based on, for example, an XML file that it requests from a server somewhere. This should keep specialist development costs to a minimum and let most of the logic run on the server end, with the devices being dumb clients that render forms and upload the data when a connection is available. Anyway, my question summarised is: how have you made the decision on supporting multiple devices for your application? Is this always an unavoidable problem, where you just have to make the call to support one or two platforms, or pay developers to write code for each platform, or alternatively supply pre-installed devices to the company? Many thanks, James

    Read the article

  • Aggregate Functions and Group By Problems

    - by David Stein
    We start with the following simple SQL statement, which works: SELECT sor.FPARTNO, sum(sor.FUNETPRICE) FROM sorels sor GROUP BY sor.FPARTNO. FPartNo is the part number and Funetprice is obviously the net price. The user also wants the description, and this causes a problem. If I follow up with this: SELECT sor.FPARTNO, sor.fdesc, sum(sor.FUNETPRICE) FROM sorels sor GROUP BY sor.FPARTNO, sor.fdesc, then if there are multiple variations of the description for a part number (typically very small variations in the text), I no longer aggregate on the part number alone. Make sense? I'm sure this must be simple. How can I return the first fdesc that corresponds to the part number? Any of the description variations would suffice, as they are almost entirely identical. Edit: the description is a text field.
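
    One common workaround (a sketch; it assumes SQL Server, where TEXT columns cannot appear in GROUP BY or MAX directly, hence the CAST) is to keep the grouping on the part number and pick an arbitrary description with an aggregate:

        SELECT sor.FPARTNO,
               MAX(CAST(sor.fdesc AS VARCHAR(8000))) AS fdesc,   -- any one description will do
               SUM(sor.FUNETPRICE)                   AS net_price
        FROM   sorels sor
        GROUP  BY sor.FPARTNO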

    Read the article

  • How to execute PHPUnit?

    - by user1280667
    PHPUnit can run a script like this: phpunit --log-junit classname filename.php (I need the XML report for my continuous integration platform). My problem is that I work with an MVC framework, so every page is called through pathofproject/indexCLI.php module=moduleName class=className etc., with three arguments in total (or through path/index.php with URL arguments when not on the command line). That means I can't simply call phpunit pathofproject/indexCLI.php module=moduleName class=className. I can think of several possible solutions and hope you can help me make one of them work. First, can the phpunit command be used with this style of invocation at all? By default it expects a class name and a file name. Second, when I run the same thing in the shell, php path/indexCLI.php module="blabla" etc., I see the assertion results in my console, but I can't use the JUnit XML option; is that possible? My last idea is to call the URL in a browser such as Mozilla, but I don't know how to tell the PHPUnit runner to produce an XML report instead of an HTML one. The goal is simply to end up with an XML report.
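
    One common way around this (a sketch only; the bootstrap path and class names are assumptions, not taken from the question) is to stop invoking the framework's front controller directly and instead give PHPUnit a thin test class that bootstraps the framework itself, so the normal --log-junit option works:

        <?php
        // ModuleNameTest.php: hypothetical wrapper so that phpunit, not indexCLI.php,
        // is the entry point and can still emit a JUnit XML report.
        require_once 'pathofproject/bootstrap.php';   // assumption: framework init code

        class ModuleNameTest extends PHPUnit_Framework_TestCase
        {
            public function testModuleRuns()
            {
                $result = ModuleName::run();          // hypothetical class under test
                $this->assertTrue($result);
            }
        }
        // Run with: phpunit --log-junit build/report.xml ModuleNameTest.php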

    Read the article

  • Conceptual question about NSAutoreleasePools

    - by ryyst
    In my Cocoa program, wouldn't a really simple way of dealing with autoreleased objects be to just create a timer object inside the app delegate that calls the following code e.g. every 10 seconds: if (pool) { // Release & drain the current pool to free the memory. [pool release]; } // Create a new pool. pool = [[NSAutoreleasePool alloc] init]; The only problems I can imagine are: 1) If the above code runs in a separate thread, an object might get autoreleased between the release call to the old pool and the creation of the new pool - that seems highly unlikely though. 2) It's obviously not that efficient, because the pool might get released even if there's nothing in it. Likewise, in the 10-second gap, many, many objects might be autoreleased, causing the pool to grow a lot. Still, the above solution seems pretty suitable for small and simple projects. Why doesn't anybody use it? What's the best practice for using NSAutoreleasePools?
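
    For reference, the usual pattern (a sketch, not the asker's code) is to let the run loop manage the main pool and only create short-lived pools around work that generates many autoreleased objects, draining them deterministically instead of on a timer:

        // Sketch: drain a short-lived pool per iteration instead of every 10 seconds.
        // 'items' is a stand-in for whatever collection is being processed.
        for (id item in items) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            // ... work with 'item' that creates many autoreleased objects ...
            [pool drain];   // memory is reclaimed now, not up to 10 seconds later
        }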

    Read the article

  • How to convert a .NET WebService-Method-Result (Soap) into its original datatype?

    - by Marc
    Hello everyone. I have two "identical" web services (SOAP) on two different servers. Don't ask why :-) WebService-1 decides whether it handles the request itself or passes the request on to WebService-2. In the latter case, the response of WebService-2 should be returned directly by WebService-1. The response datatype is complex and self-defined; with simple datatypes like 'int' or 'string' there would be no problem. The response of WebService-2 is deserialized into the generated proxy classes (I think they are called "stubs"), and therefore it is not possible to pass this object straight through as the response of WebService-1, because the types of the objects don't match. Is there a simple way to convert the serialized datatype into its original type without building a complex converter?
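
    One lightweight approach (a sketch, under the assumption that both proxy types were generated from the same WSDL and therefore share the same XML shape) is to serialize the WebService-2 proxy object and deserialize the result into the WebService-1 type, instead of writing a field-by-field converter:

        using System.IO;
        using System.Xml.Serialization;

        static class ProxyConverter
        {
            // Converts between two proxy types that map to the same XML schema.
            public static TTarget Convert<TTarget>(object source)
            {
                var writer = new XmlSerializer(source.GetType());
                var reader = new XmlSerializer(typeof(TTarget));
                using (var buffer = new MemoryStream())
                {
                    writer.Serialize(buffer, source);
                    buffer.Position = 0;
                    return (TTarget)reader.Deserialize(buffer);
                }
            }
        }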

    Read the article

  • Recommended Python publish/subscribe/dispatch module ?

    - by Eli Bendersky
    From PyPubSub: "Pypubsub provides a simple way for your Python application to decouple its components: parts of your application can publish messages (with or without data) and other parts can subscribe to and receive them. This allows message 'senders' and message 'listeners' to be unaware of each other: one doesn't need to import the other; a sender doesn't need to know who gets the messages, what the listeners will do with the data, or even whether any listener will get the message data; similarly, listeners don't need to worry about where messages come from. This is a great tool for implementing a Model-View-Controller architecture or any similar architecture that promotes decoupling of its components." There seem to be quite a few Python modules for publishing/subscribing floating around the web, from PyPubSub to PyDispatcher to simple "home-cooked" classes. Can you recommend a module that works well in most cases? Which modules have you had positive experiences with? Negative? Thanks in advance
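
    To make the quoted description concrete, here is a tiny sketch assuming PyPubSub's kwargs-style API (the topic and argument names are made up for illustration):

        from pubsub import pub  # PyPubSub

        def on_saved(filename):
            print("saved: " + filename)

        pub.subscribe(on_saved, 'file.saved')             # listener side
        pub.sendMessage('file.saved', filename='a.txt')   # sender side; no import of the listener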

    Read the article

  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development and have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like the following: MySolution (main.sln), Services (services.sln), Service A (files), Service A Test (files), View (project files), Persistence (persistence.sln), PersistenceXml (files), PersistenceXml Test (files), PersistenceDB (files), PersistenceDB Test (files). The idea is that main.sln only contains the projects for the application, meaning no test projects. The subsolutions contain the project(s) and their corresponding test projects. I was able to put all these projects under version control with AnkhSVN, so I have the same structure in the trunk of the repository. Committing changes was also no problem. Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything inside that solution but skipped services.sln, persistence.sln and all the test projects. Up to this point everything is fine. Now, here comes the problem: when I try to check out a subsolution (e.g. services.sln) I get an error, I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of right now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it can currently only use the basicHttp binding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a text file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header. Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO. OK: at first I marked the solution using System.IO.Compression as the answer, as I could never seem to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along; it is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it under the covers and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic outside the browser application; in Fiddler, the response was, in fact, gzip compressed. The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.
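
    For reference, the IIS7 switch described above boils down to a small web.config setting (a sketch; it assumes the Dynamic Content Compression feature is installed on the server):

        <system.webServer>
          <!-- lets IIS gzip dynamic responses such as the WCF service's XML -->
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>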

    Read the article

  • Remove Duplicates from JavaScript Array

    - by kramden88
    This seems like such a simple need, but I've spent an inordinate amount of time trying to do this to no avail. I've looked at other questions on SO and I haven't found what I need. I have a very simple JavaScript array, such as peoplenames = new Array("Mike","Matt","Nancy","Adam","Jenny","Nancy","Carl"); that may or may not contain duplicates, and I need to simply remove the duplicates and put the unique values in a new array. That's it. I could point to all the code that I've tried, but I think that's useless because it doesn't work. If anyone has done this and can help me out, I'd really appreciate it. JavaScript or jQuery solutions are both acceptable.
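
    A minimal sketch of one way to do it in plain JavaScript, keeping the first occurrence of each name (Array.indexOf needs IE9+ or a shim):

        var peoplenames = ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Nancy", "Carl"];

        var uniquenames = [];
        for (var i = 0; i < peoplenames.length; i++) {
            if (uniquenames.indexOf(peoplenames[i]) === -1) {
                uniquenames.push(peoplenames[i]);
            }
        }
        // uniquenames -> ["Mike", "Matt", "Nancy", "Adam", "Jenny", "Carl"]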

    Read the article

  • How do I debug into an ILMerged assembly?

    - by Rory Becker
    Summary: I want to alter the build process of a 2-assembly solution so that a call to ILMerge is invoked and the build results in a single assembly. Further, I would like to be able to debug into the resulting assembly. Preparation (a simple example): create a new solution with ClassLibrary1; create a static function 'GetMessage' in Class1 which returns the string "Hello world"; create a new console app which references the class library; output GetMessage from main() via the console. You now have a 2-assembly app which outputs "Hello World" to the console. So what next? I would like to alter the console app's build process to include a post-build step which uses ILMerge to merge the ClassLibrary assembly into the console assembly. After this step I should be able to: run the console app directly with no ClassLibrary1.dll present, and run the console app via F5 (or F11) in VS and debug into each of the 2 projects. Limited success: I read this blog post and managed to achieve the merge I was after with a post-build command of "$(ProjectDir)ILMerge.bat" "$(TargetDir)" $(ProjectName) and an ILMerge.bat file which read: CD %1 Copy %2.exe temp.exe ILMerge.exe /out:%2.exe temp.exe ClassLibrary1.dll Del temp.exe Del ClassLibrary1.* This works fairly well and does in fact produce an exe which runs outside the VS environment, as required. However, it does not appear to produce symbols (a .pdb file) which VS is able to use in order to debug into the code. I think this is the last piece of the puzzle. Does anyone know how I can make this work? FWIW, I am running VS2010 on a Win7 x64 machine.
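
    One guess at the missing piece (not a confirmed fix): ILMerge emits a merged .pdb by default, but it can only fold in symbols for inputs whose own .pdb sits next to them, and the batch file renames the exe to temp.exe without a matching temp.pdb. A sketch of the adjusted script:

        CD %1
        Copy %2.exe temp.exe
        Copy %2.pdb temp.pdb
        ILMerge.exe /out:%2.exe temp.exe ClassLibrary1.dll
        Del temp.exe
        Del temp.pdb
        Del ClassLibrary1.*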

    Read the article

  • What to use for version control with Visual Studio 2008 for inhouse projects?

    - by Boog
    We want to put a number of our in-house projects under version control. Our projects are C# .NET applications and assemblies. We originally decided to go Microsoft all the way (as is the norm around here) and tried installing Visual Studio Team Foundation Server. To say the least, it was way more trouble to get a successful install than it's worth, and I'm afraid we don't have an entire server to dedicate to TFS itself, as the installer seems to insist. We also considered a generic solution like Subversion or CVS, or possibly some kind of free online hosting that doesn't make our source publicly available under some license, as Google Code appears to. Does anyone have any suggestions that would best fit our in-house MS environment? We'd also get some benefit out of project management tools if they were nicely integrated into the solution, but this would only be a perk. I should also mention that we haven't entirely ruled out TFS, but it's looking like a pain, so anything you have to say for or against it would be helpful.

    Read the article

  • What good timesheet should and shouldn't have for a small (non programming) 50 people company?

    - by MadBoy
    I'm sure most people here have had to fill in at least one timesheet in their life that made them miserable; it can be the biggest time sink ever, especially when you have to fill it in in a hurry. I will be developing a simple timesheet application for a small company of 50 people (non-programming related; it's actually 4 companies working together) and would like it to be user-friendly and as unobtrusive as possible. So, in your opinion, what makes a good timesheet (the lack of one doesn't count :p), and what data should it store? Should it just be hours per day, with the option to choose a project and company plus a short overview of what you worked on, like: Day 1, 3:00, 'Company 1', 'Project5', 'Name', Short Overview; Day 1, 5:00, 'Company 2', 'Project6', 'Name', Short Overview. Or should it gather more data? Would it be really bad if it were a WinForms application, considering that I don't know ASP.NET or any other web-based language? I would be deploying it using ClickOnce or similar.

    Read the article

  • functions inside or outside jquery document ready

    - by Hans
    Up until now I have just put all my jQuery goodness inside the $(document).ready() function, including simple functions used in certain user interactions. But functions that don't require the DOM to be loaded, or that are only called afterwards anyway, can be placed outside $(document).ready() as well. Consider, for example, a very simple validation function such as: function hexvalidate(color) { // Validates 3-digit or 6-digit hex color codes var reg = /^(#)?([0-9a-fA-F]{3})([0-9a-fA-F]{3})?$/; return reg.test(color); } The function is only called from within the $(document).ready() function, though. What is best practice (syntax, speed): placing such a function inside or outside the jQuery document-ready function?
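
    A common arrangement (a sketch; the #color input is made up for illustration) is to keep pure helpers like this in the outer scope and only do DOM wiring inside the ready handler:

        // Pure helper: no DOM access, safe to define outside ready().
        function hexvalidate(color) {
            var reg = /^(#)?([0-9a-fA-F]{3})([0-9a-fA-F]{3})?$/;
            return reg.test(color);
        }

        // DOM wiring: needs the elements to exist, so it stays inside ready().
        $(document).ready(function () {
            $("#color").blur(function () {
                if (!hexvalidate($(this).val())) {
                    alert("Please enter a valid hex color code");
                }
            });
        });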

    Read the article

  • Issues with intellisense, references, and builds in Visual Studio 2008

    - by goober
    Hoping you can help me -- the strangest thing seems to have happened with my VS install. System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5. I have two web site projects in a solution. I am referencing NUnit / NHibernate (I did this by right-clicking on the project and selecting "Add Reference"; I've done this for several projects in the past). Things were working fine but recently stopped, and I can't figure out why. IntelliSense completely disappears for any files in my App_Code directory, and none of the references are recognized there (they are recognized in any file in the root directory of the web site project). Additionally, pretty simple code like the following (in Page_Load) fails (assume TextBox1 is definitely an element on the page): if (Page.IsPostBack) { string test1; test1 = TextBox1.Text; } It says that all the page elements are null or that it can't access them. At first I thought it was me, but given the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories and rebuilding the solution. I've also tried Tools -> Options -> Text Editor settings to ensure IntelliSense is turned on. I'd appreciate any help you can give!

    Read the article

  • Typecasting a floating value or using the math.h floor* functions?

    - by nobody
    Hi, I am coding up an implementation of interpolation search in C. The question is actually rather simple: I need to use floating-point operations to do the linear interpolation that finds the correct index, which will eventually be an integer result. In particular, my probe index is: t = i + floor((((k-low)/(high-low)) * (j-i))); where i, j, k, t are unsigned ints and high, low are doubles. Would this be equivalent to: t = i + (unsigned int)(((k-low)/(high-low)) * (j-i)); Is there any reason I would actually want to use the math.h floor* functions over just a simple (int) typecast?
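
    For non-negative values the two really are equivalent (and in a valid interpolation-search range k-low and high-low are non-negative); they only diverge for negative values, because a cast truncates toward zero while floor rounds toward negative infinity. A small sketch showing the difference:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double pos = 3.7, neg = -3.7;
            printf("%u vs %.0f\n", (unsigned int)pos, floor(pos)); /* 3 vs 3   */
            printf("%d vs %.0f\n", (int)neg, floor(neg));          /* -3 vs -4 */
            return 0;
        }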

    Read the article

  • Git + GitHub + Heroku

    - by Haseeb Khan
    Hi all, I am new to the world of Git, GitHub and Heroku. So far I am enjoying this paradigm, but coming from a background with SVN, things seem a bit complicated to me in the world of Git. I am facing a problem for which I am looking for a solution. Scenario: I have set up a new private project on GitHub. I forked the private project and now I have the following structure in my branch: /project/apps/my-apps/my-app-1, /project/apps/my-apps/my-app-2, /project/apps/your-apps/your-app-1, /project/apps/your-apps/your-app-2, /project/plugins. I can commit code from my machine into any folder I want in my fork on GitHub. Later on, these changes are pulled into the master repository by the admin of the project. For every individual application in the apps folder I have set up an app on Heroku, which is a Git repo in itself, and that is where I push my changes when I am done with my user stories on my local machine. In short, every app in the apps folder is a Rails app hosted on Heroku. Problem: what I want is that when I push my changes to Heroku, they are also committed to my project fork on GitHub, so it always has the latest code. The issue I see is that the code on Heroku is a Git repo in its own right, while the folders I have on GitHub are only part of a larger repo. So far, what I have researched is that there is something known as a submodule in the Git world which could come to the rescue, but I have not been able to find any newbie-friendly instructions. Can someone in the community be kind enough to share thoughts and help me identify the solution to this problem? Thanks in advance. Regards, Haseeb Khan haseeb [AT] tkxel.com TkXel
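
    A minimal sketch of the submodule idea (the Heroku URL, remote layout and paths are made up for illustration): each app folder in the fork becomes a submodule whose upstream is the corresponding Heroku repo, so a push to Heroku plus a submodule commit in the fork keeps both in step.

        # inside a clone of the GitHub fork: track the Heroku repo as a submodule
        git submodule add git@heroku.com:my-app-1.git apps/my-apps/my-app-1
        git commit -m "Track my-app-1 as a submodule"

        # day-to-day: work and push inside the submodule (its 'origin' is Heroku),
        # then record the new submodule commit in the fork
        cd apps/my-apps/my-app-1
        git push origin master
        cd ../../..
        git add apps/my-apps/my-app-1
        git commit -m "Point my-app-1 at the commit deployed on Heroku"
        git push origin master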

    Read the article

  • Structuring input of data for localStorage

    - by WmasterJ
    It is nice when there isn't a DB to maintain or users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and, as part of his research, calculates errors made by users in a code editor. The goal is to convert it from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging into his code might take some time. Question: I need to store files and state (last placement of the cursor and the active file). I have already done so by implementing the recommendations in another Stack Overflow thread, but I would like your input on how to structure the stored content. My current solution is a hashmap-like structure of JavaScript objects: files = {}; // later, saving files[fileName] = data; and then storing it in localStorage using the recommended helper, localStorage.setObject(files). Currently I'm also considering using some type of numeric id, so that names can be changed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?
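
    For reference, a sketch of the helper pattern mentioned above together with a numeric-id layout (the key and field names are made up): localStorage only stores strings, so objects go through JSON.

        // JSON (de)serialization helpers for localStorage.
        Storage.prototype.setObject = function (key, value) {
            this.setItem(key, JSON.stringify(value));
        };
        Storage.prototype.getObject = function (key) {
            var value = this.getItem(key);
            return value ? JSON.parse(value) : null;
        };

        // Files keyed by a numeric id, so renaming a file never touches the key.
        var files = {
            1: { name: "main.js",  contents: "// ..." },
            2: { name: "utils.js", contents: "// ..." }
        };
        var state = { activeFileId: 1, cursor: { row: 12, col: 4 } };

        localStorage.setObject("files", files);
        localStorage.setObject("state", state);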

    Read the article

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP and I have different applications on a single domain. The problem is that cookies are domain-specific, so session ids are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible on every page of this domain. I'm trying to implement a custom session manager to overcome this behaviour, but I'm not sure I'm thinking about it right. I want to completely avoid the PHP session system and make a global object which would store session data and save it to the database at the end of the script. On first access I would generate a unique session_id and create a cookie. At the end of the script I would save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT. On every access I would check the database for the session_id sent in the cookie from the client, check the IP, port and user agent (for security), and read the data into the session variable (if not expired); if the session_id has expired, delete it from the database. That session variable would be implemented as a singleton (I know I get tight coupling with this class, but I don't know of a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and domain, custom management of session expiration, and a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there a better way? Thank you!!
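
    A minimal sketch of the first couple of steps (all class, cookie and column names are assumptions; note also that setcookie's path argument already lets each application scope its own cookie under one domain):

        <?php
        class CustomSession
        {
            private static $instance;
            public $data = array();
            private $sid;

            public static function get()
            {
                if (self::$instance === null) {
                    self::$instance = new self();
                }
                return self::$instance;
            }

            public function start($cookieName, $cookiePath)
            {
                if (!empty($_COOKIE[$cookieName])) {
                    $this->sid = $_COOKIE[$cookieName];
                    // ...load $this->data from the DB row for $this->sid after
                    //    checking REMOTE_ADDR / HTTP_USER_AGENT and expiry...
                } else {
                    $this->sid = sha1(uniqid(mt_rand(), true));
                    setcookie($cookieName, $this->sid, 0, $cookiePath);
                }
            }

            public function save()
            {
                // ...INSERT/UPDATE the serialized $this->data for $this->sid,
                //    refreshing the last-access timestamp...
            }
        }

        // usage: CustomSession::get()->start('app1_sid', '/app1/');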

    Read the article

  • How do I find and open a file in a Visual Studio 2005 add-in?

    - by Charles Randall
    I'm making an add-in for Visual Studio 2005 in C# to help easily toggle between source and header files, as well as script files that all follow a similar naming structure. However, the directory structure has the files in different places, even though they are all in the same project. I've got almost all the pieces in place, but I can't figure out how to find and open a file in the solution based on the file name alone. So I know I'm coming from, say, c:\code\project\subproject\src\blah.cpp, and I want to open c:\code\project\subproject\inc\blah.h, but I don't necessarily know where blah.h is. I could hard-code different directory paths, but then the utility isn't generic enough to be robust. The solution has multiple projects, which seems to be a bit of a pain as well. I'm thinking at this point that I'll have to iterate through every project, and through every project item, to see if the particular file is there and then get a proper reference to it. But it seems to me there must be an easier way of doing this.
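
    A sketch of one simpler route, assuming the add-in already holds a DTE reference (here called applicationObject): EnvDTE's Solution.FindProjectItem searches every project in the solution by file name, so no directory paths need to be hard-coded.

        // Locate "blah.h" anywhere in the solution and open it in the code editor.
        EnvDTE.ProjectItem item = applicationObject.Solution.FindProjectItem("blah.h");
        if (item != null)
        {
            EnvDTE.Window window = item.Open(EnvDTE.Constants.vsViewKindCode);
            window.Activate();
        }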

    Read the article

  • SQL Server 2008 log size management problems

    - by b0x0rz
    I'm trying to shrink the log of a database AND set the recovery model to simple, but there is always an error, whatever I try: USE 4_o5; GO ALTER DATABASE 4_o5 SET RECOVERY SIMPLE; GO DBCC SHRINKFILE (4_o5_log, 10); GO The output of sp_helpfile says that the log file is located under (hosted solution) I:\dataroot\4_o5_log.LDF. Please help me perform this operation; the log file got large when importing a lot of data, that info is no longer needed, and I have multiple (lots of) backups taken since then. The exact error messages when running the query above are: incorrect syntax near '4'; RECOVERY is not a recognized SET option; incorrect syntax near _5_log'. I am using Visual Studio 2010 (I also have SQL Server Express installed locally; SQL Server 2008 proper is installed at the provider, shared). Thanks a lot
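
    The errors look like what happens when a name that starts with a digit is used without delimiters: 4_o5 is not a regular T-SQL identifier, so the parser chokes on the '4'. A sketch of the same batch with the names quoted:

        USE [4_o5];
        GO
        ALTER DATABASE [4_o5] SET RECOVERY SIMPLE;
        GO
        DBCC SHRINKFILE (N'4_o5_log', 10);
        GO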

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods how to generate the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL. Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to timeout after 5 minutes. Scaling is achieved by varying this timeout. Hypothesised method: A task runs in the background and for each new, unprocessed item in event_log, it creates entries in the database table user_feed pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user. The problems with the standard method are well known – what if a lot of people's caches expire at the same time? The solution also does not scale well – the brief is for feeds to update as close to real-time as possible The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database. Question: The question boils down to two points: Is this worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
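
    For concreteness, a sketch of the fan-out table the hypothesised method implies (names and types are assumptions): one row per event/recipient pair, keyed so that reading a feed is a single index range scan with no join against friends.

        CREATE TABLE user_feed (
            user_id    INT UNSIGNED NOT NULL,   -- the friend who will see the event
            event_id   INT UNSIGNED NOT NULL,   -- points at the row in event_log
            created_at DATETIME     NOT NULL,
            PRIMARY KEY (user_id, created_at, event_id)
        ) ENGINE=InnoDB;

        -- A user's feed page is then e.g.:
        -- SELECT event_id FROM user_feed
        --  WHERE user_id = 42
        --  ORDER BY created_at DESC LIMIT 50;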

    Read the article

  • Looking to eventually build a smartphone app...

    - by Brian L
    Hi all, I'm a young, inexperienced programmer (I've had a year of Java, some MATLAB, and HTML/CSS in school), but I've decided that to get better, I'm making it my goal to produce a simple smartphone app of some kind this year: probably either webOS or Android, since I'm on a PC and can't afford a Mac just to write an iPhone app. So my question is, where do I start? I've read the threads about How to Write for Android and such, but I'm not sure I have enough Java experience to just jump right in. And then there's webOS, which is based on JavaScript, correct? I guess I'd just like some input from more experienced folks. I have some Barnes and Noble book credit I'm looking to use up too, so if there's a guide you think would be useful, feel free to suggest it. tl;dr: Programming newbie ultimately wants to build a simple smartphone app. Where do I start?

    Read the article

  • ASP.NET: CSS friendly calendar control

    - by pbz
    I'm using the built-in Calendar control. It works, but in a few places the way the HTML is rendered is broken or not CSS-friendly, and unfortunately it cannot be changed (it's hard-coded). I was hoping they would fix this in .NET 4.0, but as far as I can tell the Calendar control hasn't been changed at all. Also, as far as I know, there's no CSS adapter for the Calendar control. So I would need a control that would: allow me to customize the content of each cell (like OnDayRender works); allow me to assign CSS classes to any HTML it renders; not render anything automatically that cannot be turned off, except layout code; and emit no auto-postback or auto-JS code (I can handle these by hand using simple links or custom JS calls). Basically, a simple calendar view control that would give me full rendering control. What would you recommend? Thanks!

    Read the article
