Search Results

Search found 3040 results on 122 pages for 'saveing saving'.


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I had for a fun toy project to learn about data structures and C programming was to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    With that in mind, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so that obviously has to apply to a database too, and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem. That said, there would come a point in scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it.

    I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large too.

    So my question is: how is I/O generally done with large databases like the one I described above, storing at least 1 TB as a big adjacency list? If indexing is more or less the answer, how exactly does indexing work - what data structures should be involved?
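
    To make the seek-based access pattern concrete, here is a minimal sketch (illustrative only: a hypothetical fixed-width record layout, no ACID guarantees, and an index assumed small enough for RAM) of the core idea: keep records on disk and hold only a small map of byte offsets in primary memory, so a lookup costs one seek instead of a scan.

        import struct

        RECORD_SIZE = 256  # hypothetical fixed-width record

        def append_post(f, index, post_id, payload):
            f.seek(0, 2)                   # jump to the end of the file
            index[post_id] = f.tell()      # remember where this record starts
            record = struct.pack("I", post_id) + payload.ljust(RECORD_SIZE - 4, b"\0")
            f.write(record)

        def read_post(f, index, post_id):
            f.seek(index[post_id])         # one disk seek, not a full scan
            raw = f.read(RECORD_SIZE)
            return raw[4:].rstrip(b"\0")

    Real engines extend this same idea with on-disk index structures (typically B-trees, so even a huge index needs only a few page reads per lookup) plus a write-ahead log for the ACID part.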

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

    Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transitions, etc. Traditional implementation practices for such modifications are typically invasive in nature: they introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional practices is that they literally cast the application in stone, making it difficult for end users to tailor their individual work environments to meet specific needs without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice.

    Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain/upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready-to-deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes that can be fitted to specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps, ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment and risk evaluation to the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification and Credit Rating.

    The process flow starts in Siebel when a customer applies for a loan, switches to OPA for eligibility verification and product recommendations, and then hands off to BPM for approvals. The OPA Connector for Siebel simplifies integration with Siebel's web services framework by saving the results from the self-service interview directly into Siebel. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan underwriters have visibility into the product mix (loan categories), the status of loan applications (count of approved/rejected/pending), volume and value of loans approved per processing center, processing times, requested vs. approved amounts and other relevant business metrics.

    Summary

    Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple, quick to implement and easy to maintain/upgrade (by moving customizations away from the applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM.

    Additional Information

    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Game Review: God of Light

    Luckily I came across this title at a very early stage. If I remember correctly, I took notice of God of Light on Twitter right on the weekend it was published on the Play Store.

    "Sit back and become immersed into the world of God of Light, the game that rethinks the physics puzzle genre with its unique environment exploration gameplay, amazing graphics and exclusive soundtrack created by electronic music icon UNKLE. Join cute game mascot, Shiny, on his way to saving the universe from the impending darkness. Play through a variety of exciting game worlds and dozens of levels with mind-blowing puzzles. Your goal is to explore game levels, seek for game objects that reflect, split, combine, paint, bend and teleport rays of light energy to activate the Sources of Life and bring light back to the universe."

    Mastering the various reflection items in God of Light is very easy, and new elements are introduced as the game progresses.

    Amazing puzzle game

    Here's the initial review I posted on the Play Store: "Great change in puzzles. Fantastic and refreshing concept of puzzle solving. The effects and the music match very well, putting the player in the right mood to game. Get enlightened and grow your skills until you are a true God of Light." And it remains true, even after completing the first realm entirely. Similar to Quell, it unfortunately took me only a couple of hours during the evenings to complete all levels in the three available realms. God of Light currently consists of 75 levels - 25 in each realm, to be precise - and the challenges keep increasing. Compared to the iOS version on the App Store, God of Light is available for free on Android - at least the first realm (25 levels). Unlocking the two remaining realms is done through an in-app purchase. The visual appearance, the sound effects and the background music provided by UNKLE make God of Light a superb package for any puzzle gamer. Whether it is simply reflecting light over multiple mirrors, or later on bending the rays of light with black holes, or using prisms to either split, reinforce or colourise your beam, God of Light is great fun and offers a good amount of joy.

    Check out the following screenshots for some impressions:
    God of Light: Astonishing graphics and visual appeal throughout the game
    God of Light: Introduction to the game during the first levels. New light items are introduced at each stage of the game play
    God of Light: Increasing complexity and puzzle fun

    Hopefully, Playmous is going to provide more astonishing-looking realms and interesting gimmicks in future versions.

    Play Store: God of Light
    Also, check out the latest game updates on the official web site of Playmous

    Read the article

  • Book review: Peopleware: Productive Projects and Teams

    - by DigiMortal
    Peopleware by Tom DeMarco and Timothy Lister is a golden classic that can be considered mandatory reading for software project managers, team leads, higher-level management and board members of software companies. If you make decisions about people, then you cannot miss this book. If you are already good at managing developers, then this book can make you even better - you will certainly learn new things about successful development teams.

    Why Peopleware?

    Peopleware gives you very good hints about how to build a working environment for project teams where people can really do their work. The book also covers team building, which is equally important reading. As a software developer, I found practically all the points in this book to be accurate and valid. Many times I have found myself thinking about the same things, and Peopleware made me more confident in my opinions. Peopleware also covers time management and planning topics that help you make far better use of developers' time by minimizing the amount of interruptions from phone calls, pointless meetings and i-want-to-know-what-are-you-doing-right-now questions from managers who don't write code anyway. I think that if you follow the suggestions given in Peopleware, your developers will be very happy.

    I suggest you also read another great book - Death March by Edward Yourdon. Death March describes effectively what happens when the good advice given in Peopleware is totally ignored or, worse yet, people are treated in exactly the opposite way. I consider Death March a golden classic too, and I strongly recommend you read it as well.

    Table of Contents

    Acknowledgments; Preface to the Second Edition; Preface to the First Edition
    Part I: Managing the Human Resource - Chapter 1: Somewhere Today, a Project Is Failing; Chapter 2: Make a Cheeseburger, Sell a Cheeseburger; Chapter 3: Vienna Waits for You; Chapter 4: Quality - If Time Permits; Chapter 5: Parkinson's Law Revisited; Chapter 6: Laetrile
    Part II: The Office Environment - Chapter 7: The Furniture Police; Chapter 8: "You Never Get Anything Done Around Here Between 9 and 5"; Chapter 9: Saving Money on Space; Intermezzo: Productivity Measurement and Unidentified Flying Objects; Chapter 10: Brain Time Versus Body Time; Chapter 11: The Telephone; Chapter 12: Bring Back the Door; Chapter 13: Taking Umbrella Steps
    Part III: The Right People - Chapter 14: The Hornblower Factor; Chapter 15: Hiring a Juggler; Chapter 16: Happy to Be Here; Chapter 17: The Self-Healing System
    Part IV: Growing Productive Teams - Chapter 18: The Whole Is Greater Than the Sum of the Parts; Chapter 19: The Black Team; Chapter 20: Teamicide; Chapter 21: A Spaghetti Dinner; Chapter 22: Open Kimono; Chapter 23: Chemistry for Team Formation
    Part V: It's Supposed to Be Fun to Work Here - Chapter 24: Chaos and Order; Chapter 25: Free Electrons; Chapter 26: Holgar Dansk
    Part VI: Son of Peopleware - Chapter 27: Teamicide, Revisited; Chapter 28: Competition; Chapter 29: Process Improvement Programs; Chapter 30: Making Change Possible; Chapter 31: Human Capital; Chapter 32: Organizational Learning; Chapter 33: The Ultimate Management Sin Is; Chapter 34: The Making of Community
    Notes; Bibliography; Index; About the Authors

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in the SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass -UserName 'Me' -Password 'NowEverybodyKnowsMyPassword'".

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like:

        Server1,UserName,Password
        Server2,UserName,Password"

    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me, I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and one for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the functions' help.

    Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    In the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your usernames and passwords, all neatly encrypted - and in case you're wondering, server names, usernames and passwords are all separated by commas.

    Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same one you used to encrypt.)
    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on a system database: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServersSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) `
                -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") `
                -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") `
                -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box.

    You now have three options:
    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to - it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice, and it works for us. See below for details.

    Pre-requisites:
    - A single email domain name.
    - A clear relationship between the Windows user name and the email address, e.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.

    Warning: Saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.

    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@' and save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.
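
    For illustration, after the edit the relevant element in rsreportserver.config would end up looking something like this (the domain is a placeholder):

        <DefaultHostName>yourdomain.com</DefaultHostName>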

    Read the article

  • How to debug lag using Bluetooth connected mouse and A2DP headset?

    - by gertvdijk
    I own a Logitech M555b mouse (for a week now) for use with my HP EliteBook 8570w laptop running Kubuntu 12.04. It works fine right after connecting using the KDE Bluetooth control module. However, after some (seemingly random) time, it starts to lag: movements are delayed by roughly 500 ms for a short period of time. Usually it recovers after some time too, but it can take minutes. All actions are delayed - movements, clicks, scrolls - and the movements can be choppy during these times. A workaround that always works, for the same short period of time, is to disconnect and re-connect the mouse. This can be done using the same KDE Bluetooth control module.

    What did I try already?
    - Running this at boot time, to disable any power-saving features on the Bluetooth hci0 device:

        echo on > `readlink -f /sys/class/bluetooth/hci0`/../../../power/level

    - Checking the mouse's batteries (it's just a week old; other, new batteries: same result).
    - Checking logs and kernel messages for Bluetooth-related entries: none aside from the expected messages at connect time.

    I'm running kernel 3.5.0-13-generic as provided in the xorg-edgers PPA. Booting the regular 3.2 Precise kernel results in the same behaviour.

    Some other information that may help:
    - It happens when no other Bluetooth connections are active on the machine.
    - Similar symptoms also occur on my Bluetooth stereo (A2DP) headset, but there it's audio lagging and skipping. Swapping Bluetooth profiles as described here then helps. Conclusion: it's not the mouse that's faulty. The headset always worked fine with my now-dead Thinkpad T61p with built-in Bluetooth.
    - The Bluetooth module in my laptop is connected via USB and shows up as Bus 002 Device 003: ID 0a5c:21e1 Broadcom Corp.
    - I'm mobile, and several people around me are using Bluetooth at work (A2DP mostly). It also occurs at home, where my neighbours are probably using Bluetooth as well. It could just be radio interference, but I think Bluetooth connections should just hop to another channel. Moreover, it works properly instantly after re-connecting.

    Therefore I think it's a software driver issue and I'd like to debug it. Is there any way to get more verbose logging on the Bluetooth(-hid) modules?

    Read the article

  • More Maintenance Plan Weirdness

    - by AjarnMark
    I'm not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package), but it's still a bit of a black box. You don't really know which commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly which commands, and with which options, are being run on my servers.

    Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005 SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan, edited the properties of the Subplan to set a new schedule, saved it, and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically the error was, "Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID'..". I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA's account listed as the owner. I fixed it and the job successfully ran to completion.

    Now, I don't really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Plan caused the Scheduled Job to be recreated with the old credentials. I don't know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I'm sure there is a good reason that it works this way, but rather than mess around with it much more, I'm just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I'm not a fan of Maintenance Plans.

    Read the article

  • Visual Studio 2010 Launch Events

    - by Jim Duffy
    Don't miss out on the opportunity to learn about the new features in Visual Studio 2010. Check out the MSDN Events page and find out when the talented folks of the Developer & Evangelism group will be visiting your city to prove to you that /*Life Runs On Code*/. I'll be attending the Raleigh event on June 2, 2010 from 1:00 - 5:00 PM.

    North Carolina State University, Jane S. McKimmon Conference Center
    1101 Gorman St
    Raleigh, North Carolina 27606, United States

    From the Raleigh Event page:

    Event Overview: Learn about the rich application platforms that Microsoft® Visual Studio® 2010 supports, including Windows® 7, the Web, SharePoint®, Windows Azure™, SQL®, and Windows® Phone 7 Series. From tighter tester and dev collaboration to new ALM tools, there's a lot that's new. Here's what you can expect:

    Windows Development with Visual Studio 2010
    Visual Studio has always been the best way to build compelling visual solutions for Windows. Visual Studio 2010 continues this trend with great new tooling support for Silverlight 4, WPF, and native development. In this demo-heavy session, you'll see how you can build rich Windows applications with Silverlight 4 using new trusted application features including out-of-browser execution, saving to the file system, and even COM Automation. You'll also see how you can use the new Task Parallel Library from within a WPF application to take advantage of all those cores in today's modern computers.

    Web and Cloud Development with Visual Studio 2010
    If you build solutions for the web, then this session is for you. Come see how your existing skills move forward with Visual Studio 2010, both for in-house ASP.NET development and the new frontier of the Cloud. In this session, you'll see improved designers, new HTML and JavaScript snippets, Web Forms enhancements, and how you can quickly build great web sites using Dynamic Data. You'll see the changes made to testable web sites with MVC 2.0 and how we've integrated jQuery support into the platform. You'll then see how easy it is to leverage your existing code and move to the cloud with Windows Azure.

    Windows Phone 7 Developer Tools and Platform Overview
    This session provides an overview of Visual Studio® 2010 for Windows Phone. Learn about the powerful capabilities of this new application platform and the developer tools experience, including basic IDE usage, debugging, packaging, and deployment. This session also shows how you can use Microsoft Expression® Blend™ for Windows Phone to build great Silverlight applications.

    Have a day. :-|

    Read the article

  • Walmart's Mobile Self-Checkout

    - by David Dorf
    Reuters recently reported that Walmart was testing an iPhone-based self-checkout at a store near its headquarters. Consumers scan items as they're placed in the physical basket, then the virtual basket is transferred to an existing self-checkout station where payment is tendered. A very solid solution, but not exactly original.

    Before we go further, let's look at the possible cost savings for Walmart. According to the article: "Pushing more shoppers to scan their own items and make payments without the help of a cashier could save Wal-Mart millions of dollars, Chief Financial Officer Charles Holley said on March 7. The company spends about $12 million in cashier wages every second at its Walmart U.S. stores."

    Um, yeah. Using back-of-the-napkin math, I calculated Walmart's cashiers are making $157k per hour. A more accurate statement would be saving $12M per year for each second saved on the average transaction time. So if this self-checkout approach saves 2 seconds per transaction on average, Walmart would save $24M per year on labor (see the quick check below). Maybe. Sometimes those savings will be used to do other tasks in the store, so they may not directly translate to fewer employees.

    When I saw this approach demonstrated in Sweden, there were a few differences, which may or may not be in Walmart's plans. First, the consumers were identified by their loyalty card. In order to offset the inevitable shrink, retailers need to save on labor but also increase basket size, typically via in-aisle promotions. As consumers scan items, retailers should target promos, and that's easier to do if you know some shopping history. Last I checked, Walmart had no loyalty program. Second, at the self-checkout station consumers were randomly selected for an audit in which they must re-scan all the items, just like at a typical self-checkout. If you are found to be stealing, your ability to use the system can be revoked. That's a tough one in the US, especially when the system goes wrong, either by mistake or by lying. At least in my view, the Swedes are a bit more trustworthy than the people of Walmart.

    So while I think the idea of mobile self-checkout has merit, perhaps it's not right for Walmart.
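
    A quick check of that arithmetic (the $12M/year figure is from the article; the two-second speed-up is an assumed number):

        # $12M per year saved for each second shaved off the average transaction
        value_per_second_saved = 12_000_000   # USD per year
        seconds_saved = 2                     # assumed average speed-up per transaction
        print(value_per_second_saved * seconds_saved)  # 24000000 -> $24M per year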

    Read the article

  • What are the drawbacks of sending XML to browsers and letting them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into the HTML/XHTML page to send to the browser.

    Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. It means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their bandwidth and the server's (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage.

    What are the drawbacks of this technique? I thought of several, but they don't apply in this situation:
    - Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only changes to make are to add the XSL file link to every XML, and to add a browser check (see the sketch after this list).
    - More I/O and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (just as images, CSS, and JavaScript files are).
    - Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    - Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).
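
    A minimal sketch of that browser check and fallback (hypothetical file names; lxml used for the server-side path; the capability test is left as a stub):

        from lxml import etree

        # compile the stylesheet once and reuse it across requests
        transform = etree.XSLT(etree.parse("demo.xslt"))

        def render(xml_path, client_supports_xslt):
            if client_supports_xslt:
                # send raw XML; the <?xml-stylesheet?> PI lets the browser transform it
                return open(xml_path, "rb").read(), "application/xml"
            # otherwise, fall back to transforming on the server
            html = transform(etree.parse(xml_path))
            return etree.tostring(html), "text/html"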

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly "-as-a-Service" models of computing - have a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phase of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things.

    It's not just that you're using things that now cost money - the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay for them, you can set a profit margin that is easy to track.

    Here's a case in point. Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it's best to maximize its use all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.
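
    To make that concrete, here is the kind of toy cost model you might bake into your metrics from day one - a minimal sketch with placeholder prices, not actual Windows Azure rates:

        # illustrative unit prices only - substitute your provider's real rates
        INSTANCE_HOUR = 0.12   # USD per instance-hour (assumed)
        GB_MONTH      = 0.15   # USD per GB stored per month (assumed)
        PER_10K_OPS   = 0.01   # USD per 10,000 storage transactions (assumed)

        def monthly_cost(instance_hours, gb_stored, storage_ops):
            return (instance_hours * INSTANCE_HOUR
                    + gb_stored * GB_MONTH
                    + storage_ops / 10_000 * PER_10K_OPS)

        # e.g. one small instance all month, 50 GB of data, 2M storage hits;
        # divide by paying clicks served to get a trackable cost per click
        print(round(monthly_cost(720, 50, 2_000_000), 2))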

    Read the article

  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and to provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.

    Company Overview
    A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.

    Business Challenges
    The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository, and to provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.

    Solution Deployed
    Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.

    Business Results
    - Saved more than US$693,000 in storage costs and shortened material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
    - Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
    - Gained comprehensive document management, format transformation, security, and auditing capabilities
    - Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
    - Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users' technical problems
    - Improved security by using role-based access controls to restrict access to confidential documents and information
    - Supported the efficient classification of corporate knowledge by using Oracle's metadata functions to collect, tag, and archive documents, images, Web content, and digital assets

    "We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets. This has laid a solid foundation for Guizhou Power Grid Company to improve management practices." - Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company

    Additional Information
    Guizhou Power Grid Company Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • Should I install Ubuntu on USB instead of HDD dual-boot?

    - by user2147243
    I had Ubuntu 12.04 installed as a dual-boot OS on top of Vista on my laptop. I hacked the grub settings to default to Vista on startup (instead of the default Ubuntu - pain), and all was OK for occasional Ubuntu use for the past 6 months. Then last week I got a strange message about 'lack of disk space' (~50MB free) when installing pxyplot, even though there was still about 6GB of free disk space when I checked later. Then today Ubuntu wouldn't load at all, and checking the HDD partitions in Vista, it looked like the 15GB Ubuntu partition was now three smaller partitions! So I got rid of those partitions and expanded the Vista partition to use the reclaimed space. Now I can't restart ('grub rescue' appears and doesn't 'rescue' anything), so I'll have to do a boot recovery using a Vista installation CD. (Not a particularly user-friendly failure mode of the dual-boot installation!)

    I now have to decide to either a) try installing Ubuntu on the HDD again, though I don't want to stuff up my Vista ever again, as that is my most used OS, or b) install Ubuntu on a 16GB USB 3.0 stick. Apparently performance from USB won't be as good as from the HDD, and running an OS from a USB stick does lots of reads/writes, so the stick may fail after a few years! Perhaps installing Ubuntu as a live USB set up to then run in RAM would alleviate the performance/USB-lifespan problems? If I create a live USB for the Ubuntu OS, will the laptop boot off that when I restart with it plugged in? Or will I have to change the laptop's boot-order setting whenever I want to boot Ubuntu instead of Vista? (That would be even more painful than the grub default boot order putting Ubuntu ahead of the existing Vista OS!)

    Update: I recovered my Vista setup using the Iolo System Mechanic Disaster Recovery Tool, and created a bootable USB of Ubuntu 13.10 on an 8GB USB 3.0 pendrive, with 4GB of 'persistence' to allow saving of settings, installing some packages, etc. It worked OK for a couple of test boots, but once I changed the time and desktop wallpaper, the next Ubuntu reboot crashed and I then couldn't get it to boot successfully. So I decided to install Ubuntu 12.04 LTS as a dual-boot again, but this time, instead of partitioning the HDD and installing from an ISO DVD, I used the wubi.exe tool to install Ubuntu as a dual-boot. It worked very well, although one oddity was that, despite being asked how big to make the partition (20GB), the installed Ubuntu appears to be happily installed somewhere within the Vista NTFS file system (no partition shows up in the Windows disk manager, and in the Ubuntu disk management tool the entire 133GB of HDD is showing, with ~40GB free space). A nice feature of installing the dual-boot using wubi is that the laptop now uses the Windows boot manager on startup, with Vista as the default OS and Ubuntu happily listed second on the list. So far so good.

    Read the article

  • Ubuntu 11.10 crashes all browsers often

    - by murat
    I have been using Ubuntu 11.10 for a year; today it surprised me. When I open Google Chrome it just closes itself. At first I thought it was just Chrome and tried Firefox: it also closes itself. One thing more: I tried desktop programs such as the image viewer, and they also close themselves. I restarted, but nothing changed. What can cause this? Is it a virus or some other system problem? I did not have any problem like this until today.

        $ google-chrome
        (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        --2012-04-04 19:33:14-- https://clients2.google.com/cr/report
        Resolving clients2.google.com... 173.194.70.100, 173.194.70.101, 173.194.70.102, ...
        Connecting to clients2.google.com|173.194.70.100|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        2012-04-04 19:33:19 (888 KB/s) - `/dev/fd/3' saved [16]
        (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        Moonlight: 3.99.0.3
        Moonlight: Attempting to load libmoonloaderxpi
        (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
        Segmentation fault

    After trying to install libvdpau1, the error changed:

        $ google-chrome
        --2012-04-04 20:05:03-- https://clients2.google.com/cr/report
        Resolving clients2.google.com... 173.194.70.113, 173.194.70.138, 173.194.70.139, ...
        Connecting to clients2.google.com|173.194.70.113|:443... connected.
        HTTP request sent, awaiting response... Moonlight: 3.99.0.3
        Moonlight: Attempting to load libmoonloaderxpi
        200 OK
        Length: unspecified [text/html]
        Saving to: `/dev/fd/3'
        [<=> ] 0 --.-K/s
        f4c55117d1b4656e
        [ <= ] 16 --.-K/s in 0s
        2012-04-04 20:05:12 (337 KB/s) - `/dev/fd/3' saved [16]
        Segmentation fault

    Read the article

  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars' worth of debt?

    Yes, I know questions like this have been asked before (namely here and here), but please bear with me, because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know:

    Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd easily get rid of over a hundred thousand dollars of debt? And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth? And if I go to SUNY Binghamton, which is far lesser known than the others I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss?

    The answers to these questions all tie in to my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing). Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • SSAO implementation

    - by Irbis
    I'm trying to implement SSAO based on this tutorial: link. I use deferred rendering and world coordinates for the shading calculations. When filling the g-buffer, the vertex shader output looks like this:

        worldPosition = vec3(ModelMatrix * vec4(inPosition, 1.0));
        normal = normalize(normalModelMatrix * inNormal);
        gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);

    Next, for the SSAO calculation, I render the scene as a full-screen quad and save an occlusion parameter to a texture. (Vertex positions in world space: link. Normals in world space: link.) The SSAO implementation:

        subroutine (RenderPassType)
        void ssao()
        {
            vec2 texCoord = CalcTexCoord();
            vec3 worldPos = texture(texture0, texCoord).xyz;
            vec3 normal = normalize(texture(texture1, texCoord).xyz);
            vec2 noiseScale = vec2(screenSize.x / 4, screenSize.y / 4);
            vec3 rvec = texture(texture2, texCoord * noiseScale).xyz;
            vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
            vec3 bitangent = cross(normal, tangent);
            mat3 tbn = mat3(tangent, bitangent, normal);
            float occlusion = 0.0;
            float radius = 4.0;

            for (int i = 0; i < kernelSize; ++i) {
                vec3 pix = tbn * kernel[i];
                pix = pix * radius + worldPos;
                vec4 offset = vec4(pix, 1.0);
                offset = ProjectionMatrix * ViewMatrix * offset;
                offset.xy /= offset.w;
                offset.xy = offset.xy * 0.5 + 0.5;
                float sample_depth = texture(texture0, offset.xy).z;
                float range_check = abs(worldPos.z - sample_depth) < radius ? 1.0 : 0.0;
                occlusion += (sample_depth <= pix.z ? 1.0 : 0.0);
            }

            outputColor = vec4(occlusion, occlusion, occlusion, 1);
        }

    That code gives the following results: camera looking towards -z in world space: link; camera looking towards +z in world space: link.

    I wonder if it is possible to use world coordinates in the above code. When I move the camera I get different results, because world-space positions don't change. Can I treat worldPos.z as a linear depth? What should I change to get correct results? I expect white areas where there is occlusion, so the ground should be white only near the object.

    Read the article

  • What shall I include in a 10 week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st-year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach; but in the 1st semester I will be teaching web standards technologies - HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to apportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?

    UPDATE: based on your feedback, I will only be teaching the latest version of everything - CSS3, HTML5, etc.

    I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far were:

    Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL, etc. Looking at different example websites and discussing how they work.
    Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc.)
    Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc.
    Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions)
    Lesson 4 - More JS programming fundamentals, DOM manipulation
    Lesson 5 - jQuery - making things fly about and look cool
    Lesson 6 - XML and Ajax
    Lesson 7 - PHP basics - syntax, server-side principles
    Lesson 8 - PHP and MySQL - forms, logins, saving user info
    Lesson 9 - don't know
    Lesson 10 - don't know

    Please let me know if you think this is the right order, what I've missed, how to use any spare sessions, etc. Thanks :)

    UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session; however, I do not have the option of reducing the syllabus. I will just have to explain what each technology does and encourage the students to research it in their own time.

    Read the article

  • Best way to remote restart Ubuntu from Windows machine

    - by robsoft
    Background: I'm looking to put a series of Ubuntu machines into retail locations; they're being used as dumb kiosks to show a series of slides on large LCD panel TV screens. Once installed, they won't have a keyboard or mouse connected, but will have a fixed IP on the local network. Everything is configured to auto-start: no automatic updates, no power saving, etc. I think we're pretty much good to go apart from one thing.

    I need the retail staff to be able to restart the boxes if a problem arises. We have VNC running (now that we've turned off desktop enhancements!) so that we can remotely get into the machines if we need to, but that's not something we would allow the retail staff to do. The machines are going to be physically 'out of the way' (probably in the ceiling space), so the power button is not easily accessible! I'd like to have some means of allowing the retail staff to restart the Ubuntu machine from the desktop of one of their Windows terminals. I don't really want to give them some kind of raw terminal access (the command line will frighten them!) and I don't want them to use VNC (as stated above). Ideally there would be an icon on the Windows desktop; they double-click it, reply to a simple 'are you sure?' prompt, and then the Ubuntu box is told to restart. The Windows side of that won't be a problem; we can write something using Delphi, Python & Qt4, whatever - it's the Ubuntu side of it I'm stuck with. Out of sight of the user, could I have a Windows program open a terminal across the network and tell Ubuntu to restart? Is this what SSH could be used for? (I have never set that kind of thing up.) The Windows programming side isn't really an issue; it's just that I'm a total Ubuntu noob and don't know where to start from the platform point of view.

    The other thing we considered is having the machine automatically restart itself at a set time each day (obviously out of store hours!). To me, that seems a bit unnecessary (though forcing a restart once a week/month might be worthwhile). Any thoughts or suggestions? Being able to restart the box on demand across the network is my prime requirement.
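
    For what it's worth, the SSH route can be very small on the Windows side. A minimal sketch (assuming Python with the paramiko library, and a sudoers rule that lets the kiosk account run reboot without a password prompt - both assumptions, not part of the question):

        import paramiko

        def restart_kiosk(host, user, password):
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user, password=password)
            # relies on a sudoers entry like: kiosk ALL=(ALL) NOPASSWD: /sbin/reboot
            client.exec_command("sudo /sbin/reboot")
            client.close()

    The desktop icon would simply call restart_kiosk() behind the 'are you sure?' prompt.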

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here. Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and then I will give some answers I have already found.

    Question: How do I find out which packages I have never used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course, package dependencies could force me to keep programs installed that I will never use.

    Notes:
    - Find the packages which consume much space via synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird), aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems to be down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile on 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as unused for 203 days, although I had used it seconds earlier to compile Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable popularity-contest, so I can't find this log file.
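    One crude way to approximate "never used" is to look at the access times of the executables a package installed. This is only a sketch, assuming GNU stat and a filesystem that records access times (Ubuntu's default relatime updates them only occasionally, so treat the output as a hint, not proof):

        #!/bin/sh
        # Usage: ./pkg-atime.sh totem
        # Prints the last access time of each executable the package installed.
        pkg="$1"
        dpkg -L "$pkg" | while read -r f; do
            case "$f" in
                /usr/bin/*|/usr/games/*)
                    stat -c '%x  %n' "$f"    # %x = last access time (GNU stat)
                    ;;
            esac
        done

    If every binary of a package shows an ancient access time, it is at least a candidate for removal; a library-only package has no executables, so this heuristic says nothing about it.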

    Read the article

  • [EF + Oracle] Inserting Data (1/2)

    - by JTorrecilla
    Prologue: Following the EF series (I, II and III), in this chapter we will see how to create a DB record from EF.

    Inserting Data
    As we indicated in the 2nd post: "One Entity matches a DB record, and one property matches a Table Column". To start, we need to create an object from one of the Entities:

        EMPLEADOS empleado = new EMPLEADOS();

    Alternatively, as I mentioned previously, we can use the static factory function that VS defines for each Entity. Once we have created the object, we can access its properties and fill them as on any common class:

        empleado.NOMBRE = "Javier Torrecilla";

    After filling the Entity's properties, the object must be added to the appropriate ObjectSet in the ObjectContext:

        enti.EMPLEADOS.AddObject(empleado);
        // or
        enti.AddToEMPLEADOS(empleado);

    Both methods do the same thing: queue up an insert statement. Have we finished? No. Every Entity has a property called "EntityState". This property is an EntityState enum value, one of the following:
    - Detached: the Entity has been created, but not added to the Context.
    - Unchanged: there are no pending changes on the Entity.
    - Added: the Entity has been added to the ObjectSet, but not yet sent to the DB.
    - Deleted: the object has been deleted from the ObjectSet, but not yet from the DB.
    - Modified: there are pending changes to confirm.

    Let's follow the values of this property through the creation steps:
    1. While the object is being created and we are filling its properties: EntityState.Detached.
    2. After adding it to the ObjectSet: EntityState.Added. This does not mean the record is in the DB yet.
    3. Saving the data: to save the data to the DB, we call the "SaveChanges" method of the ObjectContext. After invoking it, the property will be EntityState.Unchanged.

    What does the SaveChanges method do? It synchronizes and sends all pending changes to the DB: it adds, modifies or deletes all Entities whose EntityState property is set to Added, Deleted or Modified. After it finishes, all added or modified entities change state to Unchanged, and deleted entities take the Detached state.
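    Putting the steps together, here is a minimal sketch of the whole flow. It assumes the EMPLEADOS entity and an ObjectContext instance named enti from the earlier posts in the series; the Debug.Assert checks are illustrative only:

        using System.Data;          // EntityState enum
        using System.Diagnostics;   // Debug.Assert

        // 1. Create the entity and fill its properties: it starts out Detached.
        EMPLEADOS empleado = new EMPLEADOS();
        empleado.NOMBRE = "Javier Torrecilla";
        Debug.Assert(empleado.EntityState == EntityState.Detached);

        // 2. Add it to the ObjectSet: now Added, but nothing is in the DB yet.
        enti.EMPLEADOS.AddObject(empleado);
        Debug.Assert(empleado.EntityState == EntityState.Added);

        // 3. Push pending changes to the DB: the state becomes Unchanged.
        enti.SaveChanges();
        Debug.Assert(empleado.EntityState == EntityState.Unchanged);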

    Read the article

  • How to deal with a fellow programmer who likes to delegate tasks, with no support from the boss [closed]

    - by Rudy
    I have a problem with a fellow programmer. We are currently working together on a small project that needs to be shipped every 2 weeks. She has a tendency to ask for help with every issue she faces, whether it's a compile error, an algorithm problem, or even a sync/merge issue she caused herself. She does not even bother to check Google or try to figure things out by herself. She can ask me for help 5-10 times a day. Her husband calls her 4-6 times a day, and most of the code she has delivered is actually incorrect.

    Today she blamed me for sending the wrong delivery product. She went home after lunch on the delivery day without telling the PM or the other team members, and the code she committed does not work at all; it's not even tested. I had no choice but to roll back her code and clean it up, just to be able to run the product. I have warned her about her defective code for almost 3 iterations. She said that when she is not around, I should be able to test her module for her. I snapped, yelled that I am not her slave, and reported it directly to my boss.

    However, my boss is not a person who manages or cares about software quality. The most important thing to my boss is delivery of the product, whether it is tested or not. He can even ask us to deliver something to the client the next day that has not been tested by QA. Most of our suggestions are not followed by him. He even asked me to apologize to her because I snapped.

    I am tired of the whole situation; this kind of thing keeps repeating. I have savings to survive for 6 months, and the idea of resigning keeps haunting me. There is nothing else to learn in my current job, and I have been in better environments than this. What should I do about the situation?

    Read the article

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving the state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error message generation step (step 4 below). Here are the general steps as I see them for doing this:

    (1) The data mapper is used to get the current info (assoc array) about the object in the db:

        +=====================================================+
        | person_id | name      | favorite_color | age        |
        +=====================================================+
        | 1         | Andy      | Green          | 24         |
        +-----------------------------------------------------+

    The mapper returns an associative array, e.g. Person_Mapper::getPersonById($id):

        $person_row = array(
            'person_id'      => 1,
            'name'           => 'Andy',
            'favorite_color' => 'Green',
            'age'            => '24',
        );

    (2) The Person object constructor takes this array as an argument, populating its fields:

        class Person {
            protected $person_id;
            protected $name;
            protected $favorite_color;
            protected $age;

            function __construct(array $person_row) {
                $this->person_id      = $person_row['person_id'];
                $this->name           = $person_row['name'];
                $this->favorite_color = $person_row['favorite_color'];
                $this->age            = $person_row['age'];
            }

            // getters and setters...

            public function toArray() {
                return array(
                    'person_id'      => $this->person_id,
                    'name'           => $this->name,
                    'favorite_color' => $this->favorite_color,
                    'age'            => $this->age,
                );
            }
        }

    (3a) (GET request) The inputs of an HTML form used to change info about the person are populated using the Person getters:

        <form>
            <input type="text" name="name" value="<?=$person->getName()?>" />
            <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" />
            <input type="text" name="age" value="<?=$person->getAge()?>" />
        </form>

    (3b) (POST request) The Person object is altered with the POST data using the Person setters:

        $person->setName($_POST['name']);
        $person->setFavColor($_POST['favorite_color']);
        $person->setAge($_POST['age']);

    (4) Validation and error message generation on a per-field basis. Should this take place in the Person object or the Person mapper object? Should data be validated BEFORE being placed into the fields of the Person object?

    (5) The data mapper saves the Person object (updates the row in the database):

        // savePerson uses $person->toArray() to get the data in a more
        // digestible format for the db gateway used by person_mapper
        $person_mapper->savePerson($person);

    Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.
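    Regarding step (4), one common answer is to let the domain object validate its own fields and expose per-field messages, while the mapper refuses to persist an invalid object. Below is a minimal sketch of that split; the isValid()/getErrors() names are illustrative, not part of the code above:

        class Person {
            // ... fields, constructor, getters/setters and toArray() as above ...
            protected $errors = array();

            // Validate all fields at once, collecting messages keyed by field
            // name so the form in step (3a) can display them next to each input.
            public function isValid() {
                $this->errors = array();
                if (trim($this->name) === '') {
                    $this->errors['name'] = 'Name is required.';
                }
                if (!ctype_digit((string) $this->age)) {
                    $this->errors['age'] = 'Age must be a non-negative integer.';
                }
                return empty($this->errors);
            }

            public function getErrors() {
                return $this->errors;
            }
        }

        // In Person_Mapper: refuse to save invalid state.
        public function savePerson(Person $person) {
            if (!$person->isValid()) {
                throw new InvalidArgumentException(implode(' ', $person->getErrors()));
            }
            // ... build the UPDATE from $person->toArray() as before ...
        }

    Validating in the setters instead (the "before being placed into fields" option) also works, but whole-object validation keeps cross-field rules in one place and lets the form report every problem in a single pass.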

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >