Search Results

Search found 58378 results on 2336 pages for 'real application clusters'.

Page 20/2336 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • My Delphi 7 application halts on Application.Initialize and does not return to the next line

    - by m-abdi
    I have created an application in Delphi 7. My app had been running fine until yesterday; I don't know what happened yesterday that causes my application to halt on the Application.Initialize line in the source code and never return to the next line when I trace the program. I can't run the generated executable file from Windows either, although the same file runs correctly on another machine. Here is the code where it stops: program Info_Kiosk; uses SysUtils, Forms, ... (some other units) ; {$R *.res} begin Application.Initialize; Application.CreateForm(Tfrm_Main, frm_Main); Any help would be appreciated.

    Read the article

  • Hosting and scaling a Facebook application in the cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, and with a good provision to scale in case the app goes viral. Some details about the app: i) It would be HTML based, like a website, using Django as the framework. ii) 100K is the number of expected pageviews in a day, if the app goes viral. iii) The users will not generate any media content; only some database data will be generated by them. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling; for EC2, full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect monthly cost? E.g. would the difference in CPU and memory usage between Django and, for example, a PHP framework like CodeIgniter really make a noticeable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.
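    As a rough starting point for the capacity side of the question, the Python sketch below converts the stated 100K pageviews/day into average and peak request rates; the requests-per-pageview and peak-to-average figures are assumptions for illustration only, not numbers from the question.

      # Back-of-envelope sizing for ~100K pageviews/day (Python sketch).
      # The peak-to-average ratio and requests-per-pageview are assumptions.
      pageviews_per_day = 100_000
      requests_per_pageview = 5        # assumed: HTML plus a few static/AJAX requests
      peak_to_average_ratio = 4        # assumed: traffic concentrated in busy hours

      seconds_per_day = 24 * 60 * 60
      avg_rps = pageviews_per_day * requests_per_pageview / seconds_per_day
      peak_rps = avg_rps * peak_to_average_ratio

      print(f"Average: {avg_rps:.1f} req/s, estimated peak: {peak_rps:.1f} req/s")
      # Roughly 6 req/s average and ~23 req/s at peak under these assumptions --
      # well within a single small instance on either platform, with room to scale.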

    Read the article

  • How can I improve my real-time behavior in a multi-threaded app using pthreads and condition variables

    - by WilliamKF
    I have a multi-threaded application that is using pthreads. I have a mutex lock and condition variables. There are two threads: one thread is producing data for the second thread, a worker, which is trying to process the produced data in a real-time fashion, such that one chunk is processed as close to the elapsing of a fixed time period as possible. This works pretty well; however, occasionally, when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again. I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then, immediately upon receiving the condition in the worker thread, it also does a chunk of processing if it is time to process another chunk. In this latter case, I am seeing that I am late processing the chunk many times. I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency. Is there anything I can do to reduce the delay between the producer releasing the condition and the worker detecting that the condition has been released, so that the worker resumes processing sooner? For example, would it help for the producer to call something to force itself to be context switched out? The bottom line is that the worker has to wait each time it asks the producer to create work for it, so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period, I am also checking for real-time work to be done by the producer on behalf of the worker while the producer has exclusive access. Somehow, my hand-off back to running in parallel again occasionally results in a significant delay that I would like to avoid. Please suggest how this might best be accomplished.
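    As an illustration of the hand-off the question describes (the original code is pthreads; the sketch below is Python, so it only shows the pattern, not pthreads specifics), here is a minimal example that measures the latency between the producer signalling the condition and the worker resuming. Keeping the producer's critical section short and signalling as soon as the worker's data is ready is usually the first step; on Linux, a real-time scheduling policy such as SCHED_FIFO for the worker thread may also reduce wake-up jitter, subject to privileges.

      # Minimal sketch (Python, not pthreads) measuring producer->worker hand-off latency.
      import threading, time

      cond = threading.Condition()
      data_ready = False
      signal_time = 0.0

      def worker():
          global data_ready
          with cond:
              while not data_ready:          # guard against spurious wakeups
                  cond.wait()
              latency = time.perf_counter() - signal_time
              data_ready = False
          print(f"worker resumed {latency * 1e6:.0f} us after notify")

      def producer():
          global data_ready, signal_time
          with cond:
              # Keep this critical section as short as possible: prepare the
              # worker's data, record the time, then signal and release the lock.
              data_ready = True
              signal_time = time.perf_counter()
              cond.notify()

      t = threading.Thread(target=worker)
      t.start()
      time.sleep(0.1)                        # let the worker block on the condition
      producer()
      t.join()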

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, the people who run the systems, and so on as an ongoing (and sometimes unfactored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users, and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs, or the cost of having a team of system administrators?
    This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy. Simple Estimation The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed, which brings us to the next model type. Measure and Project A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
    Sample and Estimate This method relies on standard statistical and other predictive math. Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis and, using methods like regression and so on, project out into the future what the costs will be. I normally advise that the architect extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample (in this case I recommend using the entire population, not a smaller sample) is key. References and Tools Articles: http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx http://technet.microsoft.com/en-us/magazine/gg213848.aspx http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/ http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx   Other Tools: http://cloud-assessment.com/ http://communities.quest.com/community/cloud_tools
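    The "components, units, cost per unit, multiplied by estimated use" matrix described above translates directly into a small script. The Python sketch below shows the shape of the calculation; the unit prices and usage figures are placeholders for illustration, not current Azure rates - substitute the published rate card before relying on the result.

      # Simple-estimation cost matrix (Python sketch). Prices are PLACEHOLDERS,
      # not current Azure rates -- substitute the published rates before use.
      components = [
          # (component,               unit of measure,      cost/unit, estimated monthly use)
          ("Compute instance (small)", "instance-hour",      0.12,      2 * 720),  # 2 instances
          ("Storage",                  "GB-month",           0.15,      50),
          ("Storage transactions",     "10K transactions",   0.01,      500),
          ("Bandwidth (egress)",       "GB",                 0.15,      100),
          ("SQL Azure database",       "DB-month",           9.99,      1),
      ]

      cumulative = 0.0
      for name, unit, cost_per_unit, estimated_use in components:
          total = cost_per_unit * estimated_use      # total cost per component
          cumulative += total
          print(f"{name:28s} {estimated_use:8.0f} {unit:16s} -> ${total:8.2f}")

      print(f"{'Estimated monthly total':28s} {'':8s} {'':16s} -> ${cumulative:8.2f}")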

    Read the article

  • Get close window message in Hidden C# Console Application

    - by LexRema
    Hello everyone. I have a Windows Form that starts a console application in the background (CreateNoWindow = true, WindowStyle = ProcessWindowStyle.Hidden). The Windows Form gives me the opportunity to stop the console application at any time, but I'd like to somehow handle the close message inside the console application. I tried to use hooking like: [DllImport("Kernel32")] public static extern bool SetConsoleCtrlHandler(HandlerRoutine handler, bool add); // A delegate type to be used as the handler routine // for SetConsoleCtrlHandler. public delegate bool HandlerRoutine(CtrlTypes ctrlType); // An enumerated type for the control messages // sent to the handler routine. public enum CtrlTypes { CTRL_C_EVENT = 0, CTRL_BREAK_EVENT, CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT = 5, CTRL_SHUTDOWN_EVENT } private static bool ConsoleCtrlCheck(CtrlTypes ctrlType) { StaticLogger.Instance.DebugFormat("Main: ConsoleCtrlCheck: Got event {0}.", ctrlType); if (ctrlType == CtrlTypes.CTRL_CLOSE_EVENT) { // Handle close stuff } return true; } static int Main(string[] args) { // Subscribing HandlerRoutine hr = new HandlerRoutine(ConsoleCtrlCheck); SetConsoleCtrlHandler(hr, true); // Doing stuff } but I only get the message inside ConsoleCtrlCheck if the console window has been created; if the window is hidden, I don't get any message. In my Windows Form, to close the console application process, I use proc.CloseMainWindow() to send a message to the console window. P.S. AppDomain.CurrentDomain.ProcessExit += CurrentDomain_ProcessExit; also does not help. Do you know another way to handle this situation? Thanks.

    Read the article

  • cfml error with application.cfc page

    - by tibin mathew
    Hi, I have a problem with my CFML website. I have used the code below in my Application.cfc file to connect to the DSN, but whenever I put this on my server I get an error; I can't browse even a single test.cfm page. Is there any mistake in that code, any syntax error or something like that, or could it be some problem with the DSN? <cfset this.name = "0307de6.netsolhost.com"> <cfset this.sessionmanagement = true> <cfset this.loginstorage="session"> <cfset this.sessiontimeout = CreateTimeSpan(0,0,30,0)> <cfset this.applicationtimeout = CreateTimeSpan(2,0,0,0)> <cffunction name="onApplicationStart"> <cfscript> application.DSN = "hirerodsn"; application.dbUserName = "myusr"; application.dbPassword = "myd69!"; </cfscript> </cffunction> <cffunction name="onRequestStart"> <cfscript> request.DSN = "hirerodsn"; request.dbUserName = "myusr"; request.dbPassword = "myd69!"; </cfscript> </cffunction> Please, can anyone help me?

    Read the article

  • What java web application framework to use?

    - by frohiky
    One of the main products of my company is an Oracle Forms (and Reports) based application that "needs" to be re-written in another technology. Why? Users want a richer interface experience, and we want, preferably, to reduce costs with an open source application server. For this (HUGE) project, we intend to use a Java web application framework. Keep these points in mind: We have: hundreds of tables in our database (the ORM must be as flexible as possible); some logic which is (and will still be) based on PL/SQL procedures/functions/packages; a lot of CRUDs (the application itself is of a considerable size); a demand to work with/generate documents and workflows; an intranet-based user environment. We want: to offer a RIA interface experience; to use (if possible) an open source app server; a rapid (as possible) development framework; a somewhat mature framework with a "wise" roadmap (and considerable community support); an MVC approach combined with JS or GWT widgets (e.g. Vaadin or SmartGWT). Well, in the past weeks I've read a lot of posts, Q&As on stackoverflow, and much more: Wicket, JSF, Tapestry, Grails, GWT, Struts2, Play, Spring, Seam, Echo, ... the list goes on! I've even researched Alfresco! The obvious question: which one to use? At this time, any insight, recommendation, shared experience, or advice will be more than welcome!

    Read the article

  • WPF : Access Application Resources when not referencing Shell from App.xaml

    - by CF_Maintainer
    I am a beginner in WPF. My App.xaml looks like this: <Application x:Class="ContactManager.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Application.Resources> <Color x:Key="lightBlueColor">#FF145E9D</Color> <SolidColorBrush x:Key="lightBlueBrush" Color="{StaticResource lightBlueColor}" /> </Application.Resources> I do not set the StartupUri since I want a presenter-first approach. I do the following in App.xaml.cs: protected override void OnStartup(StartupEventArgs e) { base.OnStartup(e); var appPresenter = new ApplicationPresenter( new Shell(), new ContactRepository()); appPresenter.LaunchView(); } I have a UserControl called "SearchBar.xaml" which references "lightBlueBrush" as a StaticResource. When I try to open "Shell.xaml" in the designer, it tells me that "Shell.xaml" cannot be loaded at design time because it could not create an instance of type "SearchBar.xaml". When I debugged devenv.exe using another Visual Studio instance, it told me that it does not have access to the brush I created in Application.Resources. If one is doing a presenter-first approach, how does one access resources? I had this working when the StartupUri was "Shell.xaml" and the startup event was not present. Any clues/ideas/suggestions? I am just trying to understand. Everything works as expected when I run the application, just not at design time.

    Read the article

  • Where to start with map application

    - by rfders
    Hi there, I'm trying to design a new application which allows the user to see his/her current location on a custom map (office, university campus, etc.), but I have a couple of questions in my mind (I haven't designed this kind of application before). I'm wondering: How can I draw my own maps, and what is the best option for it? Is there any format that I have to take care of, or any specification about it? Once I have my custom map, how can I map global positioning system coordinates to the local positions? What are the tricks behind zoom on maps? Just different layers with more or less information, and those layers change on the user's demand? If I want to mark some specific points on the map, like a cafeteria, the boss's office, etc., how can I do that? Sorry if my questions are too generic and dumb, but I really need some clues about this topic because I don't have any idea how to design this kind of application as well as possible, and we don't want to reinvent the wheel. I will appreciate any help that you can provide me in order to design this application.

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; the templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. Some patience - there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates, but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder. Getting Started The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates.
    I'm working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... </LIST_G_DEPT> </EMPLOYEES> Simple enough to follow, and bread and butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: For data grouping, XDO_GROUP_?group_name? For data elements, XDO_?element_name? Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find out how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog. I use 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with something like this in your 'Name Manager' dialog. You can fix any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these, it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP_?G_EMP? Notice the 10,000 total is outside of the G_EMP group. It's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT? Notice the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Here's what I have: The only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet.
    Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally, or you can access the reports repository, then once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.
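    The same clean-up the formula performs (swap the 'T' for a space and drop the trailing UTC offset) can also be done to the data before it reaches the template. A small Python sketch, assuming timestamps in the canonical format shown in the sample XML above:

      # Python sketch mirroring the net effect of the Excel formula
      #   =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"")
      # i.e. replace the 'T' with a space and strip the trailing "-06:00" offset,
      # leaving a string Excel can parse as a date. Assumes the sample data format.
      def clean_bip_date(value: str) -> str:
          return value[:-6].replace("T", " ")

      print(clean_bip_date("1987-09-17T00:00:00.000-06:00"))
      # -> "1987-09-17 00:00:00.000"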

    Read the article

  • Real Life Pixar Lamp Can’t Get Enough Of Human Interaction

    - by Jason Fitzpatrick
    This curious lamp, powered by an Arduino board and servo motors, is just as playful as the on-screen counterpart that inspired its creation. The New Zealand Herald reports on the creation of the lamp, seen in action in the video above: The project is a collaborative effort by Victoria University students Shanshan Zhou, Adam Ben-Gur and Joss Doggett, who met in a Physical Computing class. The lamp’s movements are informed by a webcam with an algorithm working behind it. Robotics and facial recognition technology enable the lamp to search for faces in the images from its webcam. When it spots a face, it follows as if trying to maintain eye contact.

    Read the article

  • JD Edwards World Reporting Made Easy with Real Time Reporting Tools from The GL Company

    Fred talks to Paul Yarwood, US Operations General Manager and Richard Crotty, North America Business Development Manager for The GL Company, an Oracle Certified Partner, and Denise Grills, Senior Director of Marketing and Product Strategy for Oracle's JD Edwards World products. They discuss how the finance department of JD Edwards World customers can have complete control over their management reporting with a true inquiry, consolidation, and reporting solution from The GL Company, freeing up the finance team from being dependent upon IT time and resources.

    Read the article

  • Get to Know the ‘Real’ Pyro [Humorous Team Fortress 2 Video]

    - by Asian Angel
    People sometimes wonder just what is going through Pyro’s head when he is spreading mayhem and destruction. If you are one of them, then here is your chance to see things from Pyro’s ‘unique’ point-of-view! Meet the Pyro [via Dorkly]

    Read the article

  • Real or False Recovery? Economic 'tea-leaves'

    - by [email protected]
    "Information-technology is allowing the city's economy to speak to us in lots of different ways," Mr. Egan said. "We just need to find new ways of listening." Source: "New Way to Read Economy" WSJ_ARTICLE  April 8th, Carli Tuna, Blog by ARC's Steve Banker Apr 12, 2010 Alan Greenspan used cardboard box purchases and other 'source-commodity' indicators. The Carli Tuna WSJ article said that truck diesel fuel sales are a reliable indicator. What factor do you and your company use as future forward indicators? .. is it quotes, perhaps calls into the call center or sales activity?  Is your business moving to the internet and your supply chain driven by your iStore?  How do your distributors, retailers and supply chain partners provide the 'side-line' signals to you to either ramp up or contract production? With competition being only one click away, organizations need to know with higher degrees of certainty, what the econmic 'tea-leaves' are telling us and how firms need to react with production and shipping forecasts.  Firms using the latest forecasting and supply chain analytical (Bus.Intelligence) tools and technologies appear to be leading their markets "Had we been aware of that data in 2008," Mr. Leamer said, "we would have made a different call." .        

    Read the article

  • Real-Time Strategy Gameplay

    - by Ahmad Alkhawaja
    I am working on building an HTML5 RTS game, and my current state is that I am building the campaign mode of the game and want to define the gameplay (the scoring, unit behaviors/attributes). I am searching for links/articles/books about how to define the gameplay; for me this means: the scoring; figuring out levels of control (in any RTS game there are units, individuals and squads); unit actions/attributes/properties; point timing (how long will it take to play?); achievements; etc. I want to see how these areas are usually defined in RTS games. I expect to see a general document discussing this concept that I can use to build the gameplay. Any ideas? Is my question clear, or do I need to provide more details?

    Read the article

  • The Complexities to Creating Real Electronic Health Records

    Linux Journal: "But with all of this focus on streamlining and digitally electrifying health records, I began to wonder where did the Open Source community stand and where is its input? There is certainly a lot of money sitting out there for someone who wants to try to build the better mouse trap."

    Read the article

  • What's the real benefit of meta-modeling?

    - by Jakob
    After reading several texts about meta-modeling I still do not really get the practical benefit. Sometimes I think it is only an interesting mind game, not a useful tool. Sure, it is wise to clarify your modeling vocabulary: some may say class where others say entity or concept, but this is just simple documentation of your modeling terminology. Meta-modeling, as I understand it, is more complex, as it tries to formalize and abstract modeling. Some good examples are Keet's formal comparison of conceptual data modeling languages (UML, ERM and ORM) from academia and the Meta Object Facility (MOF) from industry. To me, MOF looks as impractical as CORBA, which was also created by OMG. In theory you could use meta-modeling to transform and integrate models in different modeling languages, but is anyone actually doing this?

    Read the article

  • Computer Science Degrees and Real-World Experience

    - by Steven Elliott Jr
    Recently, at a family reunion-type event, I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer, as opposed to relying on actual programming experience. The kid has been working with Python and the Blender project, as he's into making games and the like; it sounds like he has some decent programming chops. Now, as someone who has gone through a computer science degree, my initial response to this question is to say, "You absolutely MUST get a computer science degree in order to get a job as a programmer!" However, as I thought about this, I was unsure as to whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. Now, for me, I can say that I rarely use anything that I learned in college, in terms of the extremely hard math, algorithms, etc., but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid; I feel like I should tell him to do the CS degree, but I have hired so many programmers that majored in things like English, Philosophy, and other liberal arts-type degrees, even some that never went to college. In fact, my best developer falls into this latter category. He got started writing software for his church or something, and then it took off into a passion. So, while I know this is one of those juicy potential downvote questions, I am just curious as to what everyone else thinks about this topic. What would you tell a high school kid about this? Perhaps if he/she already knows a good deal of programming and loves it, he doesn't need a CS degree and could expand his horizons with a liberal arts degree. I know one of the creators of the Django web framework was an American Literature major, and he is obviously a pretty gifted developer. Anyway, thanks for the consideration.

    Read the article

  • Dynamic real-time pathfinding with C# and Unity

    - by Yakri
    A buddy and I are working on a simple 2D top-down arena combat game similar to OpenGLAD (we grew up on ye olde GLADIATOR). Thing is, we want to make some substantial deviations from our source of inspiration, including completely destructible/changeable terrain, like rivers that can be frozen, walls which can be knocked down, etc., as well as letting players and NPCs build new terrain objects, some of which cannot be moved through or seen through. So I'm tasked with creating the AI, starting with pathfinding. Because of all the changeable terrain, we need something that can check to see if the player/other NPCs are in line of sight, and which can then find paths around the existing terrain without getting completely confused by new terrain popping up and old terrain vanishing, and which is even capable of breaking through terrain. A lot of that will just be filling in the framework of the feature, but I really just don't know where to start. What I'm really looking for are relevant websites, books, articles, or keywords to google. I just can't quite find a direction to start in, because most pathfinding types we've googled up just won't give us even the most basic level of robustness we need.
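    Useful keywords to start with: A* on a weighted grid, dynamic replanning / D* Lite, and navigation meshes. As a minimal illustration (in Python rather than the project's C#, and greatly simplified), the sketch below treats terrain as a cost grid: open ground is cheap, destructible walls are expensive but passable (the unit plans to break through), and impassable cells are infinite. When terrain changes, you just update the grid and re-plan.

      # Minimal A* on a cost grid (Python sketch). Walls are modelled as very high
      # (break-through) or infinite (impassable) movement costs; changing terrain
      # is just editing `cost` and re-planning.
      import heapq
      import math

      def astar(cost, start, goal):
          rows, cols = len(cost), len(cost[0])
          def h(a, b):                      # Manhattan-distance heuristic
              return abs(a[0] - b[0]) + abs(a[1] - b[1])
          open_heap = [(h(start, goal), start)]
          came_from = {start: None}
          best_g = {start: 0}
          closed = set()
          while open_heap:
              _, node = heapq.heappop(open_heap)
              if node in closed:
                  continue
              closed.add(node)
              if node == goal:              # rebuild the path back to the start
                  path = [node]
                  while came_from[path[-1]] is not None:
                      path.append(came_from[path[-1]])
                  return path[::-1]
              r, c = node
              for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] != math.inf:
                      ng = best_g[node] + cost[nr][nc]
                      if ng < best_g.get((nr, nc), math.inf):
                          best_g[(nr, nc)] = ng
                          came_from[(nr, nc)] = node
                          heapq.heappush(open_heap, (ng + h((nr, nc), goal), (nr, nc)))
          return None                       # no path exists

      W = math.inf   # solid wall
      D = 10         # destructible wall: costly, but the unit can break through
      grid = [
          [1, 1, 1, W, 1],
          [1, W, D, W, 1],
          [1, W, 1, 1, 1],
          [1, 1, 1, W, 1],
      ]
      print(astar(grid, (0, 0), (3, 4)))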

    Read the article

  • Game Engine with a real time renderer

    - by Maik Klein
    I have been studying computer graphics for 3 semesters and we just started with OpenGL. I really enjoy it and want to create my own little engine for learning purposes. I have already read tons of different forum posts and saw the following engines: Panda3D, Ogre3D, NeoAxis, Irrlicht and Horde3D (graphics only). Now I don't want to use something like Unity or CryEngine because I want to start at a lower level. Which of those engines is suited for real-time rendering? Something that CryEngine offers - no baked lightmaps. Or at least gives me the option to add a real-time renderer?

    Read the article
