Search Results

Search found 29191 results on 1168 pages for 'joel in go'.


  • How do you go about understanding the source code of an open source project?

    - by Anirudh Vemula
    I am planning to contribute code through patches to some open source organisations to become more aware of open source development. I have chosen some organisations, but when I download their source code, I don't seem to understand even a bit of it. How do I go about understanding their source code? I tried working through a bug, but finding the place in the source code where the bug lives is also difficult when you have no idea how the code is structured and implemented. I need help with this so I can start working on open source code.

    Read the article

  • Nvidia GeForce Go 7600: can it ever run Unity?

    - by Khaled Musleh
    My laptop, a Toshiba Qosmio G30, has an Nvidia GeForce Go 7600 card and it is supposed to support 3D. I run Unity 2D now. I run 12.04 and the graphics driver is "VESA: G73 Board - toshg73m" as detected by Ubuntu. When I run /usr/lib/nux/unity_support_test -p I get this list:
      Not software rendered: no
      Not blacklisted: yes
      GLX fbconfig: yes
      GLX texture from pixmap: yes
      GL npot or rect textures: yes
      GL vertex program: yes
      GL fragment program: yes
      GL vertex buffer object: yes
      GL framebuffer object: yes
      GL version is 1.4+: yes
      Unity 3D supported: no
    The card is not blacklisted, but a similar one with GT is! Do you think there is a chance the laptop can run Unity 3D? And maybe I could change the resolution of the screen to a higher one too! I tried all the Nvidia drivers provided but none works (except 96 on Ubuntu 12.04); I get a black screen or a terminal screen. Best wishes to all.

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images. All those images are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is OK, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I'd understand if not all of my images got indexed (although that is also unclear and very frustrating for me), but I cannot understand how the indexing can go in the negative direction. All the images stay in their places, and the pages containing them stay unchanged. At least they are intended to. Any thoughts?

    Read the article

  • How do I go from a simple HTML5 tic-tac-toe game to an online two-player game?

    - by phi1o
    I've been working on an online two-player Tic Tac Toe solution for BlackBerries, both old and new. So far I have HTML5 code with a 3 x 3 layout that switches between X and O for the game mechanics. I believe I'm still missing a check-for-win function, but my question is about the server side of this game. I'm not sure how to go about learning exactly what I need. How do you take what I have now and make it into a functioning online game? I've been told WAMP is a good solution, as well as IIS, and it's all really over my head, so I'm hoping to get a little more clarity as far as what I should focus on to bring this game to life.

    Read the article

  • What is the most secure way to "Grandfather In" existing users of a paid iOS app that will go free?

    - by coneybeare
    The title pretty much says it all, but I can elaborate. I have a paid iOS app that has plenty of existing customers. I think I want to convert to a free app now and allow a full upgrade via in-app purchase. The problem is, I don't want to make my existing customers buy the app again to use it, nor do I want to make it easy for hackers to just flip a switch and get the pro version. What is the most secure way to "grandfather in" existing users of a paid iOS app that will go free?

    Read the article

  • How would I go about implementing a globe-like "ballish" map?

    - by rFactor
    I am new to 3D development and I have this idea of making the game world like our globe: a ball. There would be no corners on the map, and the game is a top-down RTS. I would like the camera to go on and on and never stop, even though you are moving it in the same direction all the time. I am not sure if this is even possible, but I would really like to build a globe-like map without borders. Can this be done, and how exactly? I am using XNA, C#, and DirectX. Any tutorials, links or help are greatly appreciated!
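
    Not from the question or any answer; one common workaround, sketched here under stated assumptions, is to skip a true sphere and instead treat the map as a torus: the camera position is wrapped modulo the world size, so scrolling never hits a border in any direction. A minimal C sketch of just the wrapping step (WORLD_W/WORLD_H and the sample coordinates are illustrative, not from the post; the same arithmetic carries over to C#/XNA):
    #include <math.h>
    #include <stdio.h>

    #define WORLD_W 4096.0f   /* illustrative world width  */
    #define WORLD_H 4096.0f   /* illustrative world height */

    /* Wrap a coordinate into [0, size) so the camera can scroll forever. */
    static float wrap(float v, float size) {
        float w = fmodf(v, size);
        return (w < 0.0f) ? w + size : w;
    }

    int main(void) {
        float cam_x = -100.0f, cam_y = 5000.0f;  /* camera pushed past the edges */
        cam_x = wrap(cam_x, WORLD_W);            /* -> 3996.0 */
        cam_y = wrap(cam_y, WORLD_H);            /* ->  904.0 */
        printf("camera wrapped to (%.1f, %.1f)\n", cam_x, cam_y);
        return 0;
    }
    Objects near the seams then have to be drawn on both sides so the wrap stays invisible; a genuine sphere would instead require mapping the terrain onto a ball (for example a cube-sphere), which is considerably more work.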

    Read the article

  • How to go about "taking over" an open-source project?

    - by LuxuryMode
    There's an open-source project that I'm interested in and use regularly. It's licensed under the Apache License 2.0 and it has basically no activity any more. It's hosted on Google Code and I'm interested in continuing its development. I'm new to the open-source process and I'm trying to figure out the appropriate way to go about this. Can I just check it out and push it to GitHub so I can continue its development in the open there? Should I contact the project "owner" first? Also, do I leave all the author information at the top of the classes, etc., even though I'm going to be making changes (I'm assuming the answer is yes)? Also, how do I practically adhere to the license requirement that "all modifications are clearly marked as being the work of the modifier"? Do I place a comment by every change I make?

    Read the article

  • Is it necessary to connect via Cisco VPN using my university account to go online?

    - by stankowait
    Original title: Bug with Cisco VPN - is it necessary to connect via Cisco VPN using my university account in order to go online on all wireless networks? In order to gain wireless access to my university network, I had to download and install the Cisco VPN client. It worked fine under 11.10 and did so for two weeks on 12.04. But since yesterday, I am unable to connect to my wireless network at home; first I have to connect via Cisco VPN using my university account. This is quite annoying, and I'm unable to download apps via the Software Center when using the Cisco VPN client. I really don't know what happened, because it worked fine for two weeks and I did not change a thing.

    Read the article

  • Is JScript dying? If so, where should I go? [closed]

    - by David Is Not Here
    I recently poked around Google for a little bit, looking for information about coding in JScript. It's very sparse, which surprised me -- it took a link to a link to find Microsoft's own reference, which appears to omit most if not all references to console-based scripting that extends past JavaScript. I'm working with the console here, not a webpage, so input and output seem very different from what Microsoft explains. If JScript is dying (and it appears to be so), where do I go from here? VBScript? My options are limited because the computers I'm using this on are carefully patrolled for new software. JScript's similarity to JavaScript was the biggest reason I chose it for porting over some of my prior work. I'm specifically looking for, at best, a console scripting language that doesn't need any extra software on Windows XP or higher and that at least supports standard input, output, pause, and file manipulation -- little else.

    Read the article

  • What are the advantages of cloud computing for your business? Google launches "Go Google Cloud Calculator"

    What are the advantages of cloud computing for your business? Google launches "Go Google Cloud Calculator" to help gauge the benefits. Cloud computing is attracting more and more businesses, and millions of them have turned to Google by migrating to Google Apps. It is still a bold decision, since it is hard to imagine what working "in the cloud" really means and what its true advantages are. What impact does online collaboration have on your workplace? How could increased email storage capacity, or integrated instant messaging and video chat, affect your company's productivity?

    Read the article

  • Google Drive: better integration with the new Gmail, maximum attachment size rises to 10 GB

    Google Drive: better integration with the new Gmail. Maximum attachment size rises to 10 GB. Google has just updated its online storage service, Google Drive. The goal is to integrate it much more closely with Gmail by making it possible to insert documents from Drive directly into an email without leaving the mail account. The operation could not be simpler with the arrival of a new icon at the bottom of the "new message" window for inserting documents hosted in Google's cloud. [IMG]http://ftp-developpez.com/gordon-fowler/Nveau%20Gmail%20et%20G%20Drive.png[/IMG] Among the advantages...

    Read the article

  • Best advice for a game programmer who wants to go indie?

    - by JStriedinger
    So, I'm working right now as an intern at a mobile game development company. I've used Unity quite a lot for a year now, but that's about all the experience I have with game design/development. Here's the thing: I want to go indie, mainly for fun. I really enjoy games, and by making indie games I believe I can let my imagination fly and make personal stuff. Unfortunately I just don't know where to start! I'm interested in making mobile and web games, so what should I download: Stencyl? Construct 2? Xcode for iOS? Maybe a great plugin for Unity would be fine? What would be your single best piece of advice for someone like me (a programmer also interested in design)? :)

    Read the article

  • What languages should I learn before I go to college? [closed]

    - by CLUEL3SS
    I've been working with PHP, MySQL, and some HTML for the past 3-4 years now, just as a hobby. I'm only 19 and college is coming soon. I want to go into Web and Software Development and/or Network Security and Administration. I know networking is a whole different ballpark, but as for programming, which languages do you suggest I get under my belt before college? I was thinking the C languages (C++, C#), Java, .NET. Should I learn any more server-side scripting languages: Python, Perl, Ruby? I used to be somewhat familiar with Java but haven't written it in a good while. What would you suggest? Thanks!

    Read the article

  • Yahoo sitemap validation

    - by Joel
    Hello, I am trying to submit a sitemap.xml (index) to Yahoo Site Explorer, but with no luck. I tried using the website feed option in Site Explorer to submit the sitemap, but I got validation errors. However, when submitting the same sitemap to Google Webmaster Tools, it was validated successfully. Could it be because I am using a sitemap with the image tag:
    <image:image>
      <image:loc>http://www.domain.com/pic.jpg</image:loc>
      <image:title>picture</image:title>
    </image:image>
    When I tried validating the sitemap with online tools such as http://www.xml-sitemaps.com/validate-xml-sitemap.html and http://www.w3.org/2001/03/webdata/xsv, the error I received was: Attempt to load a schema document from http://www.google.com/schemas/sitemap-image/1.1 (source: new namespace) for http://www.google.com/schemas/sitemap-image/1.1, failed: Not recognised as W3C XML Schema or RDDL: html. However, the declaration I use at the top of the document is the same as suggested by Google on their official page at http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=178636 :
    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
    <url>
    Any ideas how to resolve this issue? Thanks, Joel

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks, assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:
    MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
    MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
    MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
    MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
    MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
    MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
    MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores
    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than that of the serial query.

    Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
    update Employees set Zip = 2001 where EmployeeID % 50 != 1
    go
    update statistics Employees with fullscan
    go

    Let's create the temporary table #FireDrill with all possible Zip codes.
    drop table #FireDrill
    go
    create table #FireDrill (Zip int primary key)
    insert into #FireDrill select distinct Zip from Employees
    update statistics #FireDrill with fullscan
    go

    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --First serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 1)
    go
    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 0)
    go
    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler.

    Intermediate summary: the reason for the higher duration with the parallel plan was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes.
    update Employees set Zip = EmployeeID / 400 + 1
    go
    update statistics Employees with fullscan
    go

    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with even Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 1)
    go
    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with even Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 0)
    go
    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate summary: when the employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster since it can use Bitmap Filtering.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.
    update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
    update Employees set Zip = 2001 where EmployeeID % 50 != 1
    go
    update statistics Employees with fullscan
    go

    Let's create the temporary table #FireDrill with a limited set of Zip codes.
    drop table #FireDrill
    go
    create table #FireDrill (Zip int primary key)
    insert into #FireDrill select distinct Zip
          from Employees where Zip between 1800 and 2001
    update statistics #FireDrill with fullscan
    go

    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 1)
    go
    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with uneven Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 0)
    go
    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was a sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes.
    update Employees set Zip = EmployeeID / 400 + 1
    go
    update statistics Employees with fullscan
    go

    Let's execute the query serially with MAXDOP 1.
    --Example provided by www.sqlworkshops.com
    --Execute query with even Zip code distribution
    --Serially with MAXDOP 1
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 1)
    go
    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.
    --Example provided by www.sqlworkshops.com
    --Execute query with even Zip code distribution
    --In parallel with MAXDOP 0
    set statistics time on
    go
    declare @EmployeeID int, @EmployeeName varchar(48), @zip int
    select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
          inner join #FireDrill fd on (e.Zip = fd.Zip)
          order by e.Zip
          option (maxdop 0)
    go
    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: one has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend watching them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Should I scrap my own leaderboard and go for the built-in Facebook one?

    - by Magnus Johansson
    Currently I'm rolling my own score and leaderboard functionality in my FB canvas game. In my game, users can see what score they have; in addition, I have a public leaderboard where everybody can see all scores from all other users. (I also give each user the option to appear as anonymous on the leaderboard, if desired.) But now I started thinking: why do I have my own leaderboard system? Facebook has this Scores API and I started to play around with it. It, of course, integrates well with Facebook, with scores and achievements showing up in the ticker and what not. But it seems that I can't let each user see a public leaderboard in much the same way I currently have it. It does let users see their friends' scores, though. Let's face it, that is what FB is all about, right? Friends. So the question is: should I scrap my own leaderboard and go for the built-in Facebook one (and skip the public part of it)? My gut feeling says yes, but I wanted to hear what others think.

    Read the article

  • Where did ULSTraceLog go to in the SharePoint 2010 Logging Database?

    - by Jan Tielens
    The Logging Database is one of the many new concepts that will make the life of many SharePoint administrators quite a bit more enjoyable. In SharePoint 2007 the Unified Logging System (ULS) logged all of its data to text files, typically found on your SharePoint server in 12\LOGS. We still have that in SharePoint 2010, but besides those text files, ULS can also write the data to a database! The advantages are obvious: easy to query, one central location for all servers in the farm, easy to build reports, etc. You can find this ULS data in the SharePoint 2010 logging database (typically called WSS_Logging), in the view ULSTraceLog. Quite recently on one of my demo machines (standalone installation on Windows 7) I noticed the ULSTraceLog view was not available in the logging database. It turned out that there is a Timer Job that's responsible for writing the data to the database; when the Timer Job hasn't executed, the view is not there (the first time it executes, the view is created). Even more, the timer job was disabled, so the view would never be created, nor would any data be written to the database. If you encounter this situation as well, it's quite easy to solve:
    • Open the SharePoint Central Administration site
    • Navigate to the Monitoring section
    • Select Review Job Definitions
    • Click on the job with the name Diagnostic Data Provider: Trace Log
    • Click on the Enable button to enable it
    • Optionally click on Run Now afterwards, to start it immediately
    There you go, the ULSTraceLog view will be created and the ULS messages will appear in the database!

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?
    EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications
    EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:
    • The database can authenticate users directly from its hidden store
    • All database communication would happen over SSL
    • Data validation can (but maybe shouldn't?) be handled by the database
    • The only authorization we care about other than admin functions is someone only being allowed to edit their own post
    • We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes)
    • Administrative functions would be restricted by database authorization
    • No one can add themselves to an administrator role
    • The database is relatively easy to scale
    • There is little to no true business logic; this is a basic CRUD app

    Read the article

  • Should I go with OpenGL to see my future in the game development industry? [closed]

    - by Priyank
    Possible Duplicate: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry?
    I tried Google but found quite old articles, so I am in search of an answer in the context of 2012. Hi all, I don't know if you will consider this question appropriate for this community, but I am constantly searching for a perfect answer. What I have seen is that most of the games released these days are DirectX 1x based, except for a few games like StarCraft or Diablo, which don't have high-end graphics and use OpenGL. So I have a few questions to ask. The platforms I would like to target are PC (Windows), Xbox 360, and PS3 (a must).
    • Should I go with learning OpenGL to see my future in the game development industry, or should I shift to DirectX?
    • If I learn OpenGL first, will it be difficult to learn DirectX afterwards?
    • Which API is most suitable for indie development?
    • Which of the two APIs is better from a coder's (programmer's) point of view, i.e. OOP and style of coding?
    • Should OpenGL being cross-platform be the only reason to choose it over DirectX, even when vendors are not providing stable enough drivers for it?
    Thanks in advance. I have read this post, but I still have a few questions: Should I continue studying OpenGL or just switch to DirectX to give me a better chance of landing a job in the game industry?

    Read the article

  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY: Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks, and familiarize yourself with the patching tools in EBS. Having a proactive patch plan is the way to go! Proactive patching is a preventive measure that allows you to have a complete patching strategy when applying patches periodically. Oracle provides several tools to help you get started and to set the foundation for a solid and proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy along with the benefits. Among other things it covers the following:
    • How to plan ahead for system downtime
    • Patching tools in E-Business Suite (AutoPatch, OUI, OPatch)
    • How to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.)
    • How to properly test your patching plan and move to Production
    Make sure you visit the new E-Business Patching Community! We encourage you to access the "E-Business Patching Community" prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experiences with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin.
    Related articles:
    • What's the Best Way to Patch an E-Business Suite Environment?
    • Patch Wizard Utility

    Read the article

  • Open Source Project all dressed up but nowhere to go...

    - by Calanus
    Over the past two years a colleague and I have built an online statistical analysis application using a mixture of Silverlight, WCF and R. I (a C# programmer) wrote all the Silverlight and WCF stuff, whilst my colleague (a statistician) came up with the stats algorithms and wrote the R code. Now we think that this app is fairly unique: a rich-GUI online statistics application that is much more intuitive than all the other online stats apps that I've seen. But despite this we don't really know where to go with the project, mainly for the following reasons: 1) It's fairly complicated stuff; without the mix of programming and stats skills it would be difficult for anyone to "get into" the project and contribute. 2) We are stalled by the lack of a proper place to host the site. Currently it sits on the family Windows 7 media centre, not exactly the best place to host it as it could interfere with the missus trying to watch Corrie/Friends/Oprah etc. So, anyone got any ideas on how to move forward with this? I guess that my strength is programming, not marketing, so despite working hard at this for the past couple of years I feel that I've reached a dead end! Also, does anyone know of any free Windows hosting for open source projects? If I could find a proper place to put the app I might feel re-energised about the whole thing. The source code is on CodePlex at http://silverstats.codeplex.com, whilst the app is currently hosted at http://silverstats.co.uk

    Read the article

  • Why does my mic boost automatically go to 100 on every boot?

    - by Ben
    When my computer turns on, it automatically sets the "mic boost" sound setting to 100. This causes a loud static sound in my speakers. I can manually go to alsamixer and turn the mic boost down manually, but I would prefer it if I didn't have to do this every time I turn the computer on. I've tried running sudo alsactl store after fixing the settings, and this does save them, but I have to run sudo alsactl restore to restore the settings. This means that I have to manually fix the sound every time I start the computer anyway, so it isn't really a fix. I tried putting sudo alsactl restore in my startup programs, but that didn't seem to fix anything. I'm running Ubuntu 12.04, but I started having this problem before upgrading from 11.10. I'm using a Sony Vaio laptop. I'm not really sure what made it start; it seemed like I just started having the problem randomly one day. Any help would be appreciated! Edit: here is the output from running amixer: http://paste.ubuntu.com/1060080/

    Read the article

  • Do loops kind of reset every time you go through them? [closed]

    - by JacKeown
    #include <iostream>
    using namespace std;
    int main (void) {
        cout << " 1\t2\t3\t4\t5\t6\t7\t8\t9" << endl << "" << endl;
        for (int c = 1; c < 10; c++) {
            cout << c << "| ";
            for (int i = 1; i < 10; i++) {
                cout << i * c << '\t';
            }
            cout << endl;
        }
        return 0;
    }
    Hey, so this code produces a times table. I found it on Google Code's C++ class online. I'm confused about why "i" in the second for loop resets to 1 every time you go through that loop... or is it being declared again in the first parameter? Thanks in advance!
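
    Not part of the original post: a minimal C sketch (the scoping rule is the same in C and C++) showing why the inner counter restarts. The declaration in the inner for statement runs again on every pass of the outer loop, so a fresh counter is created and initialized to 1 each time; nothing is "reset", the old one simply never survives past one outer iteration.
    #include <stdio.h>

    int main(void) {
        for (int c = 1; c <= 3; c++) {      /* outer loop: runs 3 times           */
            for (int i = 1; i <= 3; i++) {  /* a brand-new i starts at 1 here     */
                printf("c=%d i=%d  ", c, i);
            }
            printf("\n");                   /* i goes out of scope at this point  */
        }
        return 0;
    }
    Each of the three output lines starts again at i=1, which is exactly the behaviour seen in the times table above.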

    Read the article

  • C programming arrays: I don't understand how I would go about making this program; can anyone guide me through the basic outline please? :) [on hold]

    - by Rashmi Kohli
    Problem
    The temperature of a car engine has been measured, from real-world experiments, as shown in the table below:
    Time (min)   Temperature (°C)
    0            20
    1            36
    2            61
    3            68
    4            77
    5            110
    Use linear regression to find the engine's temperature at 1.5 minutes, 4.3 minutes, and any other time specified by the user.
    Background
    In engineering, we often measure several data points in an experiment, but then need to predict a value that we have not measured which lies between two measured values, as in the problem statement above. If the relation between the measured parameters seems to be roughly linear, then we can use linear regression to find the relationship between those parameters. In the graph accompanying the problem statement, the relation seems to be roughly linear, hence we can apply linear regression to this problem. Assuming y {y0, y1, ... yn-1} has a linear relation with x {x0, x1, ... xn-1}, we can say that y = mx + b, where m and b can be found with linear regression as follows:
    m = (n*sum(x*y) - sum(x)*sum(y)) / (n*sum(x^2) - (sum(x))^2)
    b = (sum(y) - m*sum(x)) / n
    For the problem in this lab, linear regression gives a fitted straight line (blue in the original graph) compared to the measured curve (red). As you can see, there is usually a difference between the measured values and the estimated (predicted) values. What linear regression does is minimize those differences while still giving us a straight line. Other methods, such as non-linear regression, are also possible to achieve higher accuracy and better curve fitting.
    Requirements
    Your program should first print the table of temperatures similar to the way it's printed in the problem statement. It should then calculate the temperature at minutes 1.5 and 4.3 and show the answers to the user. Next, it should prompt the user to enter a time in minutes (or -1 to quit), and after reading the user's specified time it should give the value of the engine's temperature at that time. It should then go back to the prompt.
    Hints
    • Use a one-dimensional array to store the temperature values given in the problem statement.
    • Use functions to separate tasks such as calculating m, calculating b, calculating the temperature at a given time, printing the prompt, etc. You can then give your algorithm as well as your pseudocode per function, as opposed to one large algorithm diagram or one large sequence of pseudocode.
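
    Not part of the original post; a minimal C sketch of the outline being asked for, using the six measured points from the problem statement and the standard least-squares formulas (function and variable names are illustrative):
    #include <stdio.h>

    #define N 6

    /* Measured data from the problem statement. */
    static const double time_min[N] = {0, 1, 2, 3, 4, 5};
    static const double temp_c[N]   = {20, 36, 61, 68, 77, 110};

    /* Least-squares slope m and intercept b for y = m*x + b. */
    static void linear_regression(const double x[], const double y[], int n,
                                  double *m, double *b) {
        double sx = 0, sy = 0, sxy = 0, sxx = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxy += x[i] * y[i]; sxx += x[i] * x[i];
        }
        *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        *b = (sy - *m * sx) / n;
    }

    int main(void) {
        double m, b, t;
        linear_regression(time_min, temp_c, N, &m, &b);

        printf("Time (min)  Temperature (C)\n");
        for (int i = 0; i < N; i++)
            printf("%10.0f  %15.0f\n", time_min[i], temp_c[i]);

        printf("Estimated temperature at 1.5 min: %.1f C\n", m * 1.5 + b);
        printf("Estimated temperature at 4.3 min: %.1f C\n", m * 4.3 + b);

        /* Prompt loop: -1 quits, any other time is estimated from the fit. */
        while (printf("Enter a time in minutes (-1 to quit): ") > 0 &&
               scanf("%lf", &t) == 1 && t != -1.0)
            printf("Estimated temperature at %.2f min: %.1f C\n", t, m * t + b);

        return 0;
    }
    The regression here uses all six points at once; separate helper functions for m and b (as the hints suggest) would simply split linear_regression in two.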

    Read the article
