Search Results

Search found 12310 results on 493 pages for 'andrew day'.

Page 432 of 493

  • Creating a RESTful API - HELP!

    - by Martin Cox
    Hi Chaps. Over the last few weeks I've been learning about iOS development, which has naturally led me into the world of APIs. Searching around on the Internet, I've come to the conclusion that using the REST architecture is very much recommended, due to its supposed simplicity and ease of implementation. However, I'm really struggling with the implementation side of REST.

    I understand the concept: using HTTP methods as verbs to describe the action of a request, responding with suitable response codes, and so on. It's just that I don't understand how to code it. I don't get how I map a URI to an object. I understand that a GET request for domain.com/api/user/address?user_id=999 would return the address of user 999 - but I don't understand where or how that mapping from /user/address to some method that queries a database takes place. Is this all coded in one PHP script? Would I just have a method that grabs the URI like so:

        $array = explode("/", ltrim(rtrim($_SERVER['REQUEST_URI'], "/"), "/"))

    And then cycle through that array: first I would have a request for a "user", so the PHP script would direct my request to the user object and then invoke the address method. Is that what actually happens? I've probably not explained my thought process very well there. The main thing I'm not getting is how that URI /user/address?id=999 is somehow broken down and executed - does it actually resolve to code?

        class user(id) {
            address() {
                //get user address
            }
        }

    I doubt I'm making sense now, so I'll call it a day trying to explain further. I hope someone out there can understand what I'm trying to say! Thanks Chaps, look forward to your responses. Martin

    p.s. - I'm not a developer yet, I'm learning :)
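    What the question is groping toward is usually called a front controller: every request is rewritten to a single script, which splits the path and dispatches to a class method. The sketch below is only an illustration of that idea - the User class, the routing rules and the 404 handling are assumptions, not an existing framework API:

        <?php
        // index.php - every request is rewritten here (e.g. by mod_rewrite),
        // so /api/user/address?user_id=999 arrives as one script invocation.
        class User {
            public function address(array $params) {
                // in a real app this would query the database
                return "address of user " . (int)$params['user_id'];
            }
        }

        $path     = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH); // "/api/user/address"
        $segments = explode('/', trim($path, '/'));                   // ["api", "user", "address"]
        $resource = $segments[1];                                     // "user"
        $action   = $segments[2];                                     // "address"

        $controller = new User(); // in practice, picked by looking $resource up in a routing table
        if (method_exists($controller, $action)) {
            echo $controller->$action($_GET);
        } else {
            header('HTTP/1.1 404 Not Found');
        }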


  • Threading calls to web service in a web service - (.net 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls, in a web service. Our portal will get a message, split that message into 2 messages, and then do 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to (pseudo code):

        XmlNode DNode = GetaGetDemoNodeSomehow();
        XmlNode ENode = GetAGetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;

        Thread dThread = new Thread(delegate
        {
            //Web Service Call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });

        Thread eThread = new Thread(delegate
        {
            //Web Service Call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });

        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();

        //combine the resulting XML and return it.
        //Maybe throw a bit of logging in to make architecture happy

    Another option we thought of is to create a worker class, pass it the service information and have it execute. This would allow us to have a bit more control over what is going on, but could add additional overhead. Another option brought up would be 2 asynchronous calls, managing the returns through a loop: when the calls are completed (success or error), the loop picks it up and ends.

    The portal service will be called about 50,000 times a day. I don't want to gold plate this sucker; I'm looking for something lightweight. The services that are being called on the broker do have timeout limits set, and are already heavily logged and audited, so I'm not worried about that part. This is .NET 2.0, and as much as I would love to upgrade I can't right now, so please leave all the goodies that came after 2.0 out.
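    For the "2 asynchronous calls" option, the generated .NET 2.0 web service proxies already expose an asynchronous pattern, which avoids creating threads by hand. The sketch below assumes the generated proxies expose the usual Begin/End pair for a method named HIALRequest (names copied from the pseudo code above; the timeout value is an assumption):

        // assumes: using System.Threading; using System.Xml; plus the generated proxy classes
        GetDemographics d  = new GetDemographics();
        GetEligibility  ge = new GetEligibility();

        // start both calls without blocking
        IAsyncResult dAsync = d.BeginHIALRequest(DNode, null, null);
        IAsyncResult eAsync = ge.BeginHIALRequest(ENode, null, null);

        // wait for both to finish, with an overall timeout (milliseconds)
        WaitHandle.WaitAll(
            new WaitHandle[] { dAsync.AsyncWaitHandle, eAsync.AsyncWaitHandle },
            30000,
            false);

        // End* rethrows any exception from the call, so wrap in try/catch as needed
        XmlNode demResponse = d.EndHIALRequest(dAsync);
        XmlNode elResponse  = ge.EndHIALRequest(eAsync);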


  • Just 2 free months 2 learn or improve my skills

    - by microspino
    On the 30th of June I will leave my everyday work to start as a freelance developer. I'd like to set a period of 2 months apart to improve my dev skills. At work I code in C#, and during my spare time I enjoyed building Ruby on Rails web applications and creating some Arduino prototypes. I'm something more than a junior, but I don't feel like a real senior developer because I never had a big corporate project built and designed by me with the help of other juniors (although I don't think this is really a good definition of a "senior", it helps describe my feelings).

    Using a scale from 0 (ignorant) to 10 (proficient like a "samurai"), the list below describes the skills I would like to improve with just 2 months. I've already bought some nice and updated books on all the subjects hereunder. The order doesn't matter:

    C = 1
    C# & .Net = 6
    Arduino & Processing = 2
    Ruby = 5
    Rails = 5
    HTML/XHTML/CSS = 9
    Javascript = 6
    Objective-C/iPhone dev = 2
    Python = 4
    Django = 4
    Design Patterns = 3
    Algorithms = 3
    Git = 5

    I haven't included SQL or databases in general, nor networking, because I spent 10 years working with them in the past and I feel pretty solid there for now. As an aside, I've picked up some interest in Redis, Node.js and HTML5 from reading about them on the web.

    After two months, since I have to pay my bills, I could go searching for some new job. If the learning and developing went really well, maybe I could also invest in something I gave birth to during those months. Can you give me some advice on what you think is better to improve, or to develop a learning project on (something like a "summer of code" thing)? The whole point is to see my weaknesses and work on them.


  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now, I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.) and they are potentially quite heavy: we have up to millions of rows / customer / day, and I may want to know how many queries we had in the last month, compared week by week, etc. - that is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts which scan the databases and generate big CSV files. I don't like this solution for several reasons:

    as is typical with those kinds of scripts, they fall into the write-once and never-touched-again category
    tracking things in "real-time" is necessary (we have a separate toolset to query the last few hours at the moment)
    this is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of issue.


  • MySql "comments" parameter as descriptor?

    - by Nick
    So, I'm trying to learn a lot at once, and this place is really helpful! I'm making a little running log website for myself and maybe a few other people, and I have it so that the user can add workouts for each day. With each workout, I have a variety of information the user can fill out for the workout, such as running distance, time, quality of run, course, etc... I store this in a MySql database as a table with fields titled "distance", "time", "runquality", etc... Now, these field titles don't match up with what I want displayed on the running log, so I was thinking of using the "Comments" attribute for a field to store its human-readable title--thus the field "runquality" would have "Quality of run" as its comment, and then I would pull the comment with a SQL query and display it instead of the field name. Is this a good theoretical/practical way of going about it? And what sort of SQL would I use to pull the comment for the field anyway? Secondly, suppose I want to add the ability for the user to create their own workout descriptors. So say a user wants to add a "temperature" descriptor for their workout. Should I create a script that adds fields to my workout table, or should I create a separate table listing only workout descriptors and somehow link the descriptor table with the "contents" table? I haven't learned any theory about database design or anything so any help is appreciated!
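    If the column-comment route is taken, the comments can be read back at runtime from MySQL's information_schema (or with SHOW FULL COLUMNS). A sketch - the database and table names here are assumptions, not from the question:

        -- read the human-readable labels stored as column comments
        SELECT COLUMN_NAME, COLUMN_COMMENT
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA = 'runninglog'
          AND TABLE_NAME   = 'workouts';

        -- equivalent quick check from the MySQL client:
        -- SHOW FULL COLUMNS FROM workouts;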


  • Parsing HTML: Call to a member function children() on a non-object

    - by sm56d
    Hello all, I was just helped with this question, but I can't get it to move on to the next block of HTML.

        $html = file_get_html('http://music.banadir24.com/singer/aasha_abdoo/247.html');
        $urls = $html->find('table[width=100%] table tr');
        foreach($urls as $url){
            $song_name = $url->children(2)->plaintext;
            $url = $url->children(6)->children(0)->href;
        }

    It returns the list of the names of the first album (Deesco), but it does not continue to the next album (The Best Of Aasha). It just gives me this error:

        Notice: Trying to get property of non-object in C:\wamp\www\test3.php on line 26
        Fatal error: Call to a member function children() on a non-object in C:\wamp\www\test3.php on line 28

    Why is this, and how can I get it to continue to the next table element? I appreciate any help on this!

    Please note: this is legal, as the songs are not bound by copyright and they are available to download freely; it's just that I need to download a lot of them and I can't sit there clicking a button all day. Having said that, it's taken me an hour to get this far.
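    The error usually means that one of the rows matched by the selector (a header, spacer or album-title row) does not have the cells the loop expects, so children(6) comes back as a non-object. A defensive sketch against the same simple_html_dom API used above - the is_object checks are the addition, the rest mirrors the original loop with renamed variables:

        foreach ($urls as $row) {
            $nameCell = $row->children(2);
            $linkCell = $row->children(6);

            // skip rows that don't have the expected cells instead of crashing on them
            if (!is_object($nameCell) || !is_object($linkCell) || !is_object($linkCell->children(0))) {
                continue;
            }

            $song_name = $nameCell->plaintext;
            $link      = $linkCell->children(0)->href;
        }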


  • PHP strtotime without leap-years

    - by Christian Sciberras
    With regards to this thread, I've developed a partial solution:

        function strtosecs($time,$now=null){
            static $LEAPDIFF=86400;
            $time=strtotime($time,$now);
            return $time-((date('Y',$time)-1968)/4*$LEAPDIFF);
        }

    The function is supposed to get the number of seconds for a given string without counting leap years. It does this by calculating the number of leap years since 1970 ((year - 1968) / 4), multiplying that by the difference in seconds between a leap year and a normal year (which, in the end, is just the number of seconds in a day). Finally, I simply remove all those excess leap-year seconds from the calculated time. Here are some examples of the inputs/outputs:

        // test code
        echo strtosecs('+20 years',0).'=>'.(strtosecs('+20 years',0)/31536000);
        echo strtosecs('+1 years',0).'=>'.(strtosecs('+1 years',0)/31536000);

        // test output
        630676800 => 19.998630136986
        31471200  => 0.99794520547945

    You will probably ask why I am doing a division on the output? It's to test it: 31536000 is the number of seconds in a (non-leap) year, so that 19.99... should be 20 and 0.99... should be 1. Sure, I could round it all and get the "correct" answer, but I'm worried about the inaccuracies.
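    Part of the inaccuracy comes from (year - 1968) / 4 being a fractional approximation: it credits a fraction of a leap day for the current year whether or not February 29 has actually passed, and it ignores the century rule. A sketch of counting the actual leap days instead (an illustration of the idea, not a drop-in replacement - it still does not check whether Feb 29 of the final year has been reached):

        function leapDaysSinceEpoch($time) {
            $year = (int)date('Y', $time);
            $days = 0;
            // 1972 was the first leap year after the Unix epoch; date('L') also
            // handles the century exceptions (e.g. 2100 is not a leap year)
            for ($y = 1972; $y <= $year; $y += 4) {
                if (date('L', mktime(0, 0, 0, 1, 1, $y))) {
                    $days++;
                }
            }
            return $days;
        }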


  • OpenCL Matrix Multiplication - Getting wrong answer

    - by Yash
    here's a simple OpenCL matrix multiplication kernel which is driving me crazy:

        __kernel void matrixMul( __global int* C, __global int* A, __global int* B, int wA, int wB){
            int row = get_global_id(1); //2D Thread ID x
            int col = get_global_id(0); //2D Thread ID y
            //Perform dot-product accumulated into value
            int value;
            for ( int k = 0; k < wA; k++ ){
                value += A[row*wA + k] * B[k*wB+col];
            }
            C[row*wA+col] = value; //Write to the device memory
        }

    Where (inputs)

        A = [72 45 75 61]
        B = [26 53 46 76]

    Output I am getting:

        C = [3942 7236 3312 5472]

    But the output should be:

        C = [3943 7236 4756 8611]

    The problem I am facing is that, for any dimension of array, the elements of the first row of the resulting matrix are correct, while the elements of all the other rows are wrong. By the way, I am using pyopencl. I don't know what mistake I am making here. I have spent the entire day on this with no luck. Please help me with this.
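    One thing that stands out (an observation, not a confirmed diagnosis - the host-side global work sizes and buffer setup in pyopencl matter just as much) is that the accumulator is never initialised, which is undefined behaviour, and that the result is indexed by wA where the output row width is wB. A sketch of the same kernel with those two details changed:

        __kernel void matrixMul(__global int* C, __global int* A, __global int* B, int wA, int wB)
        {
            int row = get_global_id(1);
            int col = get_global_id(0);

            int value = 0;                              // start the dot product at zero
            for (int k = 0; k < wA; k++) {
                value += A[row * wA + k] * B[k * wB + col];
            }
            C[row * wB + col] = value;                  // index the output by its own width
        }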


  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    Type one command to run the test suites for ANY project I find on Github.
    Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
    Build gems from the ground up with Autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose.

    So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired? What are the guys who are writing the Paperclip tests and Authlogic tests doing? What is their setup?

    Thanks for the insight. Looking for answers that will make me a more effective tester.


  • Need opinions on LaTeX and ever upgrading

    - by yCalleecharan
    Hi, I've been using LaTeX since 2005 with the TeXLive distribution, and I've been upgrading as each new TeXLive distribution comes out. In recent years I noticed an increase in new packages, updated packages, and in one instance a new package bearing a different name replacing an old one by the same package author. A LaTeX document which relies heavily on packages and which was produced a few years back may start to produce warnings and error messages when compiled with present-day LaTeX.

    The primary reason I switched to LaTeX is its reliability and robustness for creating big documents easily, not to mention the adorable typographic quality. With LaTeX one doesn't have to worry about how to open a docx in an old program supporting only doc, for instance. Now, when there are so many continual changes to the packages in a LaTeX distribution, I tend to wonder when this madness will end. Not that having enhanced and new features in packages is bad, but not all updated packages are backward compatible. Eventually one would like to be able to compile, in 10 years' time, a LaTeX file that he/she is working on at present, and not get any compilation warnings/error messages due to some unpredictable behavior of updated packages, or due to a package that has been cast off from a LaTeX distribution.

    If I understand correctly, CTAN does keep a database with all packages from different versions. I would like to know how you LaTeX users handle this issue. Thanks a lot...


  • Some images fails to load on Windows Server 2008

    - by Guffa
    I have an application running on Windows Server 2008 that is processing uploaded images. Currently it successfully processes about 8000 images per day, creating 11 different sizes of each image. The problem I have is that sometimes the application fails to load some images; I get the error "System.Runtime.InteropServices.ExternalException: A generic error occurred in GDI+.". The upload only accepts files with a JPEG extension (jpg/jpeg/jpe) or with a JPEG MIME type, and from what I can tell those images really are JPEG images.

    If I look at the image file in Windows Explorer on the server, it can successfully extract the thumbnail from the file, but if I try to open it, I get the error message "This is not a valid bitmap file, or its format is not currently supported." from Paint. If I copy the image to my own computer, running Windows 7, there is no problem opening the image. It works in Paint, Windows Photo Viewer, Adobe Bridge, and Photoshop. If I try to load the image using Image.FromStream the same way as in the application running on the server, it loads just fine. (I have copied the file back to the server, and it still doesn't work, so there is nothing in the copying process that changes it.)

    When I look at the image information in Bridge, I see that the images were created using Picasa 3.0, but other than that I can't see anything special about them. I haven't yet found anyone having the same problem, or any known problems like this with the Picasa application. Has anyone had a similar problem, or does anyone know if there is something special about images created using Picasa? Is there any image codec that needs installing on the server to handle all kinds of JPEG images?
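    One diagnostic that is sometimes worth trying in this situation (purely a suggestion, not a known fix for the Picasa case) is loading the file with GDI+'s image-data validation turned off, since that overload occasionally accepts files that the plain Image.FromStream(stream) call rejects:

        // a diagnostic sketch; the path handling is illustrative only
        // assumes: using System.Drawing; using System.IO;
        static Image TryLoadWithoutValidation(string path)
        {
            byte[] bytes = File.ReadAllBytes(path);
            // deliberately not disposed here: GDI+ keeps reading from the stream
            // for as long as the Image is alive
            MemoryStream ms = new MemoryStream(bytes);
            // (useEmbeddedColorManagement: false, validateImageData: false)
            return Image.FromStream(ms, false, false);
        }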


  • To OpenID or not to OpenID? Is it worth it?

    - by Eloff
    Does OpenID improve the user experience?

    Edit: Not to detract from the other comments, but I got one really good reply below that outlined 3 advantages of OpenID in a rational, bottom-line kind of way. I've also heard some whisperings in other comments that you can get access to some details on the user through OpenID (name? email? what?) and that, using that, it might even be possible to simplify the registration process by not needing to gather as much information. Things that definitely need to be gathered in a checkout process:

    Full name
    Email (I'm pretty sure I'll have to ask for these myself)
    Billing address
    Shipping address
    Credit card info

    There may be a few other things that are interesting from a marketing point of view, but I wouldn't ask the user to manually enter anything not absolutely required during the checkout process. So what's possible in this regard? /Edit

    (You may have noticed stackoverflow uses OpenID.) It seems to me it is easier and faster for the user to simply enter a username and password in a signup form they have to go through anyway. I mean, you don't avoid entering a username and password with OpenID either. But you do avoid the confusion of choosing an OpenID provider, and the trip out to and back from an external site.

    With Microsoft making Live ID an OpenID provider (More Info), bringing on several hundred million additional accounts on top of those provided by Google, Yahoo, and others, this question is more important than ever.

    I have to require new customers to sign up during the checkout process, and it is absolutely critical that the experience be as easy and smooth as possible; every little bit harder it becomes translates into lost sales. No geek factor outweighs cold hard cash at the end of the day :) OpenID seems like a nice idea, but the implementation is of questionable value. What are the advantages of OpenID, and is it really worth it in my scenario described above?


  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:

    Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    Denormalize the database and use Redis on each webserver, using Booksleeve as a Redis client.
    Use a combination of SQL Server and Redis.
    Use SQL Server 2012 together with Hadoop.
    Use Hadoop without SQL Server.

    What is the best way for a read-only database to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario?

    The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and the interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table that all queries can be performed on, with all other information from the other tables reachable by a simple key lookup.


  • Planning and coping with deadlines in SCRUM

    - by John
    From wikipedia:

    During each "sprint", typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product "backlog," which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.

    I was reading this and two questions immediately popped into my head:

    1) If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily double what seems reasonable. As a developer, I hate being pushed into committing what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates rather than having to roughly estimate and then double it!

    2) Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if this feature is only 1/2 done at the end of the sprint?

    The wiki article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (<1 day), but this is after the sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.


  • Installing Mercurial on Windows Apache XAMPP Tutorial

    - by Tim Dellas
    After asking this question (http://stackoverflow.com/questions/2675764/xampp-mercurial-installation-on-windows-apache-hgwebdir-cgi-script-error) and reading through the whole internet, including this question (http://stackoverflow.com/questions/644322/how-do-i-get-mercurials-hgwebdir-working-on-windows) and all its links, for about 10 hours, I can't seem to find a solution. I began with this tutorial, http://mercurial.selenic.com/wiki/HgWebDirStepByStep, and I really don't want to install ancient versions of Mercurial. I got my Windows Apache to run Python scripts and CGI scripts, and to publish them in the wild, but hgwebdir just won't work.

    Question 1: Can someone please enrich his personal blog with a tutorial on how to install MERCURIAL on a WINDOWS XAMPP installation and make it visible to the world? I guarantee a lot of pageviews, as this is not a trivial problem. And this would sincerely help a lot of other people, I guess.

    Question 2: For example, even after browsing everywhere for half a day, I just cannot find out which version of Python I need to pair with the freshest version of Mercurial, and I get the "magic number is wrong" error. This would be my question, if no one has time to write up a nice blog post.

    Sorry for being a bit frustrated.


  • jQuery Looking for a function to equalize heights of divs based on tallest image in a table.

    - by Jamie
    I am doing a site that will have a table with a bunch of images in it. They are user-added images, so I have no control over the aspect ratio; what I do have control over is that I can force a max height/width, and the image will not exceed this in either direction, but the height can (and will) fluctuate.

        <table class="pl_upload">
            <tr class="row1">
                <td class="col1">
                    <div>
                        <div class="img-container">
                            <a href="#"><img src="img.jpg" width="182" alt="X" /></a>
                        </div>
                    </div>
                </td>
        .......

    What I need, ideally, is to get the height of the tallest image and then make all of the child divs the same height, based on the height of the tallest one. I hope that makes sense, been a long day. I made a jsfiddle page for it here: http://jsfiddle.net/Evolution/27EJB/
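    A rough sketch of the usual jQuery approach - wait until the images have loaded (so their heights are real), find the tallest one, then apply that height to every container. The class names are taken from the markup above; everything else is an assumption:

        // run after images have loaded, otherwise .height() can report 0
        $(window).load(function () {
            var tallest = 0;

            // find the tallest image in the table
            $('.pl_upload .img-container img').each(function () {
                tallest = Math.max(tallest, $(this).height());
            });

            // give every container the same height
            $('.pl_upload .img-container').height(tallest);
        });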


  • Help with mysql sum and group query and managing results for jquery graph.

    - by Scarface
    I have a system I am trying to design that will retrieve information from a database so that it can be plotted in a jQuery graph. I need to retrieve the information and somehow put it in the necessary coordinates format, for example two coordinates:

        var d = [[1269417600000, 10],[1269504000000, 15]];

    The table I am selecting from stores user votes, with the fields: points_id (1 = vote up, 2 = vote down), user_id, timestamp, topic_id. What I need to do is select all the votes, somehow group them into their respective days, and then sum the difference between 1 votes and 2 votes for each day. I then need to somehow display the data in the appropriate plotting format shown earlier. For example: April 1, 4 votes. The data needs to be separated by commas, except for the last plot entry, so I am not sure how to approach that. I showed an example below of the kind of thing I need, but it is not correct:

        echo "var d=[";
        $query=mysql_query(
            "SELECT *,
                SUM(IF(points_id = \"1\", 1, 0)) - SUM(IF(points_id = \"2\", 1, 0)) AS 'total'
             FROM points
             LEFT JOIN topic ON topic.topic_id=points.topic_id
             WHERE topic.creator='$user'
             GROUP by timestamp
             HAVING certain time interval"
        );
        while ($row=mysql_fetch_assoc($query)){
            $timestamp=$row['timestamp'];
            $votes=$row['total'];
            echo "[$timestamp,$votes],";
        }
        echo "];";
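    For the trailing-comma problem, one common approach (a sketch, assuming the timestamp column holds Unix timestamps) is to collect the points into a PHP array and let json_encode emit the brackets and commas:

        $points = array();
        while ($row = mysql_fetch_assoc($query)) {
            // flot-style graphs expect milliseconds, so multiply the Unix timestamp by 1000
            $points[] = array($row['timestamp'] * 1000, (int)$row['total']);
        }
        echo 'var d = ' . json_encode($points) . ';';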


  • How to learn a C++ GUI library effectively?

    - by Chan
    Hello everyone, I have many options for a GUI in my head from searching on stackoverflow, but these are what I chose among others:

    Qt
    gtkmm
    GTK+

    I used GTK+ a couple of years ago, and I felt so much pain using a C API without string objects and containers. I prefer the C++ style, so I then switched to C++ gtkmm, but the documentation was bad at that time; I found no help when encountering an issue. Now I want to give Qt4 a hard try, but I really want to know how to learn a GUI library effectively.

    With core C++, I usually pick a problem and try to solve it in different ways using that particular technique or functionality. On the other hand, after skimming through the documentation on the Qt site, I don't think this way of studying is applicable, since the GUI classes and APIs are so much bigger. Plus I'm still in school, so I won't have much time to play with it all day long.

    How did you guys learn GUI programming? Can anyone share some experience of how they learned? That would be invaluable input for me!

    Best regards, Chan Nguyen


  • Draw a comparison between an integer in a specific row and a count

    - by XCoderX
    This is a follow-up question to this one: Check for specific integer in a row WHERE user = $name

    I want a user to be able to comment on my site exactly five times a day. After these five times, the user has to wait 24 hours. In order to accomplish that, I raise a counter in my MySQL database, right next to the user: where the name of the user is, that is where the counter gets raised. When it reaches 5 it should stop counting and reset after 24 hours. In order to check the time, I use a timestamp; I check if the timestamp is older than 24 hours. If that is the case, the counter gets reset (-5) and the user can comment again. To do that, I use the following code; however, it never stops at five. My guess is that my comparison is wrong somehow:

        $counter = "SELECT FROM table VALUES CommentCounterReset WHERE Name = '$name'";

        if(!isset($_SESSION['ts']));
        {
            $_SESSION['ts'] = time();
        }

        if ($counter >= 5)
        {
            if (time() - $_SESSION['ts'] <= 60*60*24){
                echo "You already wrote five comments.";
            }
            else
            {
                $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset-5 WHERE Name = '$name'";
            }
        }
        else
        {
            $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset+1 WHERE Name = '$name'";
            echo "Your comment has been added.";
        }
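    Two things jump out (observations, not a tested fix): $counter is only ever a string of SQL - it is never executed - so $counter >= 5 compares against the query text itself, and the SELECT syntax ("SELECT FROM table VALUES ...") is not valid MySQL. A sketch of what the check presumably needs to do, keeping the table and column names from the question:

        // actually run the query and fetch the stored counter before comparing it
        $result  = mysql_query(
            "SELECT CommentCounterReset FROM `table` WHERE Name = '" . mysql_real_escape_string($name) . "'"
        );
        $row     = mysql_fetch_assoc($result);
        $counter = (int)$row['CommentCounterReset'];

        if ($counter >= 5) {
            // ... existing 24-hour check goes here ...
        }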


  • How to accept confirmation Automatically in PowerShell for Outlook

    - by user2919845
    How do I accept a confirmation automatically in PowerShell for Outlook? I have a script that exports attachments from Outlook email (see below). It works correctly on one PC, but on another PC there is a problem: Outlook shows a message and wants an answer: Permit / Deny / Help. If I manually click Permit or Deny, it works correctly. I want to automate it. Can you give me some suggestions on how to do this in PowerShell? I have tried to set Outlook not to show this message, but I didn't succeed.

    My script:

        # <-- Script ---------->
        # the script works with the Outlook Inbox folder:
        # check if emails have attachments with ".txt" and save those attachments to $filepath

        # path for exported files - attachments
        $filepath = "d:\Exported_files\"

        # create the Outlook object
        $o = New-Object -comobject outlook.application
        $n = $o.GetNamespace("MAPI")

        # $f - the "dorucena posta" (Inbox) folder
        $f = $n.GetDefaultFolder(6) # 6 - Inbox

        # select the newest 10 emails, and from those only the ones with attachments
        $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
            # process only unread mail
            if ($_.unread -eq $True) {
                # mark processed mail as read, so it is not processed again the next day
                $_.unread = $False
                $SenderName = $_.SenderName
                Write-Host "Email from: ", $SenderName
                # process all attachments
                $_.attachments | foreach {
                    $a = $_.filename
                    If ($a.Contains(".txt")) {
                        Write-Host $SenderName, " ", $a
                        # copy *.txt attachments to the $filepath folder
                        $_.saveasfile((Join-Path $filepath "$a"))
                    }
                }
            }
        }
        Write-Host "Finish"
        # <------ End Script ---------------------------------->


  • How do I get flashvars in Firefox using object embed tag only to work?

    - by Liam
    I am trying to generate an <object>-tag-only embed code and cannot get Firefox to pass the FlashVars values along to Flash. This seems to work every place else that I've tried it, but fails in Firefox. Here is a sample of the embed that I'm using:

        <object type="application/x-shockwave-flash"
                classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
                codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=10,0,0,0"
                width="550" height="400" id="Main" align="middle">
            <param name="movie" value="Main.swf" />
            <param name="allowScriptAccess" value="always" />
            <param name="allowFullScreen" value="true" />
            <param name="bgcolor" value="#ffffff" />
            <param name="quality" value="high" />
            <param name="menu" value="false" />
            <param name="FlashVars" value="foo=1" />
        </object>

    Please note that the Flash experience does show up in Firefox, but when I add traces and actually run the application, it fails to read the values. This has had me scratching my head for a day and I'm pretty stumped. If anyone has any guidance on this it would really be appreciated.
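    One widely used fallback (an assumption on my part, not something from the original post) is to put the variables on the SWF URL itself; Flash Player exposes query-string parameters to the movie the same way it exposes FlashVars, which sidesteps whichever param Firefox is ignoring:

        <!-- a sketch: pass the values as a query string on the SWF URL,
             in both the data attribute and the movie param -->
        <object type="application/x-shockwave-flash"
                data="Main.swf?foo=1"
                width="550" height="400" id="Main" align="middle">
            <param name="movie" value="Main.swf?foo=1" />
            <param name="allowScriptAccess" value="always" />
            <param name="allowFullScreen" value="true" />
        </object>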


  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design type book that focuses heavily on building your objects/software in such a way that it would be:

    Low coupling
    High cohesion
    Easily maintainable / extendable
    Easy to test
    Easy to navigate / debug

    The above characteristics are the most important ones, but maybe it would also include (though not necessarily) designing for:

    Performance - don't want to design a system and at the end find out it's dog slow :)
    Scalability - again, don't want to design something and at the end find out it won't scale.

    I'd also prefer (but again, not mandatory):

    Something newer - architectural principles seem to gradually evolve/improve over time and I'd like something with current thinking.
    .Net as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it. It doesn't really matter if it's in VB.NET or C#.

    Some of the topics it would talk about are how to minimize dependencies and how to use interfaces throughout your solution rather than concrete classes. Maybe it would contrast/compare some of the newer design principles like DDD, the Repository Pattern, etc...

    I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based on the topics I talked about above first. Is there such a book?


  • hosting a high traffic facebook app (game)

    - by z3cko
    We are currently developing a high-traffic Facebook application. All the traffic will fall within one month, where there are 500,000 to 1,000,000 expected users. After that month, the game is over and we have a winner - so the app will be archived.

    We are currently planning to develop the application with Ruby on Rails and are searching for hosting options that can deal with the traffic. The problem is not so much the users, but the peak values: we will have around 500,000 requests coming in daily within a short timeframe (let's say within 3 minutes in the worst case). We are expecting 500,000 to 1,000,000 users of the application, with peaks at 1:00pm (timezone GMT+1), where most (up to 80%) of the users will send most of the requests.

    The requests run from the 11th of June to the 11th of July - after that, the app/game is closed/over. We are currently developing an aggressive caching mechanism - we are thinking about 2 or 3 small apps/webservices that will handle the load. The load is distributed as follows:

    a) main application, cached data (11 screens, 200k each)
    b) voting: every day until 1:00pm (timezone GMT+1) - every user votes with about 10k of data sent, with high concurrent peak values!

    Questions: Is there any specific application setup that is recommendable? Are there any hosting partners that can be recommended? Thanks!


  • The question about the basics of LINQ to SQL working

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ would get all customers from the database ("SELECT * FROM Customers"), move them into the Customers array and then make a search in that array using .NET methods. This is very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application.

    Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run?

    The reason I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, and I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    It doesn't work! That's why I think that we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
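    On the "when is the query actually run?" part, LINQ to SQL defers execution until the query is enumerated. A small sketch - names here are illustrative, assuming a typical DataContext called DB; DataContext.Log is the built-in way to see the generated SQL:

        var over30 = from c in DB.Customers
                     where c.Age > 30
                     select c;               // nothing has been sent to the database yet

        DB.Log = Console.Out;                // echo the generated SQL as it is executed

        foreach (var c in over30)            // the SELECT ... WHERE Age > 30 is issued here,
        {                                    // the first time the query is enumerated
            Console.WriteLine(c.Name);
        }

        var asList = over30.ToList();        // ToList()/ToArray()/Count() also force execution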


  • Developing a 2D Game for Windows Phone 8

    - by Vaccano
    I would like to develop a 2D game for Windows Phone 8. I am a professional application developer by day, and this seems like a fun hobby. But I have been disappointed trying to get going. It seems that 2D games (far and away the majority of games) do not have any real support. The Windows Phone makers did not include support for Direct2D, so unless you are planning to make a fully 3D app, you are out of luck. So, if you just want to make a nice 2D app, these are your choices:

    Write your game using XAML and C# (performance issues?).
    Write your game using Direct3D, but only draw on one plane.
    Use the DirectX Tool Kit found on CodePlex. It allows you to use the dying XNA framework's API for development.

    Number 3 seems the best for my game, but I hate to waste my time learning the XNA API when Microsoft has clearly stated that it is not going to be supported going forward. Number 2 would work, but 3D development is really hard; I would rather not have to do all that just to get the 2D effect. (Assuming Direct2D is easier - I have yet to look into that.) Number 1 seems the easiest, but I worry that my app will not run well if it is based on XAML rendering rather than DirectX.

    What is the suggested method from Microsoft? And who decided that 2D games were going to get shortchanged?

