Search Results

Search found 5887 results on 236 pages for 'perform'.


  • Opportunities in Development in our Swedish office

    - by anca.rosu
    Hi everyone, my name is Henrik and I joined the JRockit group in 2004. Before that, my background was at Microsoft, both as a Test Competence lead and as a Program Manager. As an Engineering Manager at Oracle I lead a team of 11 developers. I focus on people management and the daily operations of the department, with a heavy focus on interaction and dependencies between the groups and departments here at the Stockholm development site. I also make sure my team delivers on our commitments. I would like to give you a brief summary of the Oracle JRockit team:
    - The development group in Stockholm delivers several products for the Oracle Fusion Middleware stack. Our main products are JRockit VE, which allows you to run a Java Virtual Machine without an operating system; the JRockit Java Virtual Machine, which is the default JVM for all Oracle middleware products; and JRockit Mission Control, a set of tools that allows developers to monitor their applications at runtime and perform advanced latency analysis as well as in-production memory leak detection.
    - The office has several departments focusing on different aspects of the product development process: not only building features and testing them, but everything from building the infrastructure needed to automatically build and test the products, to sustaining engineering that tracks down bugs in customer systems and provides them with patches.
    Some inspirational lines around what the Oracle JRockit group can offer you in terms of progress, development and learning: it is a unique chance to get insight and experience building enterprise-class software for one of the world's largest software companies. There are almost unlimited possibilities for the right candidate to learn about silicon features and how to implement support for them in software, as well as compiler optimizations. The position will also give insight into the processes needed to produce software at this level in the industry. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com. Technorati Tags: Development,Sweden,Jrockit,Java,Virtual Machine,Oracle Fusion Middleware,software

    Read the article

  • Installer doesn't display partition I want to install to

    - by Aditya
    While performing an Ubuntu 10.10 installation on my laptop, the installer doesn't show the partitions pertaining to the PC. My PC configuration is as follows:
    - HP Pavilion dv6-2020AX, AMD Turion II Dual Core Mobile Processor M500, 4 GB RAM
    - OS installed: Windows 7
    - 500 GB hard drive, partitioned as follows: C: 227 GB (free: 142 GB) | D: 11.9 GB (free: 1.98 GB) - Recovery | F: 174 GB (free: 18 GB) | G: 50.5 GB (free: 50.4 GB)
    I want to perform a dual-boot installation on my PC, so that Ubuntu resides in the free disk space of G:. Therefore, I started the Ubuntu 10.10 installation and selected the manual partitioning feature. However, in the 'Allocate Drive Space' section of the installation, the following partition information is displayed:
      Partition    Type    Size         Used
      /dev/sda                                    (500 GB total)
      /dev/sda1            1 MB         unknown
      /dev/sda2    ntfs    208 MB       unknown
      /dev/sda3    ntfs    244813 MB    168540 MB
      /dev/sda4    ntfs    255083 MB    3221 MB
    So, what exactly is the problem? What should I do to install Ubuntu 10.10 in the G: disk space? Why are the partitions not being shown the way they should be? Any suggestions? Thank you for the help.

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view. While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
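
    A minimal sketch of the event-log idea, in SQL. The table and column names are illustrative, not from the original question: pending events are inserted when a feature ships, the per-page check only reads that user's pending rows, and completing or dismissing the tutorial deletes the row, so no completed-flag columns ever accumulate.

      -- Hypothetical event-log table: one row per user per event still to be shown.
      CREATE TABLE user_pending_event (
          user_id    BIGINT      NOT NULL,
          event_name VARCHAR(50) NOT NULL,
          PRIMARY KEY (user_id, event_name)
      );

      -- On page view: fetch only what this user still needs to see (empty for most users).
      SELECT event_name
      FROM   user_pending_event
      WHERE  user_id = 21312315;

      -- When the tutorial is completed (or actively dismissed), the row is simply removed.
      DELETE FROM user_pending_event
      WHERE  user_id = 21312315
        AND  event_name = 'NewTutorial';

    One thing to weigh with a sketch like this is the initial fan-out: seeding a pending row for every one of 250 million users is itself a very large write, so the cost of that insert has to be compared against the per-user flag approach the question dismisses.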

    Read the article

  • Sams Teach Yourself Windows Phone 7 Application Development in 24 Hours

    - by Nikita Polyakov
    I am extremely proud to announce that the book I helped author is now out and available nationwide and online! Sams Teach Yourself Windows Phone 7 Application Development in 24 Hours. It's been a great journey, and I am honored to have worked with Scott Dorman, Joe Healy and Kevin Wolf on this title. Also worth mentioning is the great work that the editors from Sams and our technical reviewer Richard Bailey have put into this book! Thank you to everyone for the support and encouragement! You can pick up the book from:
    http://www.informit.com/store/product.aspx?isbn=0672335395
    http://www.amazon.com/Teach-Yourself-Windows-Application-Development/dp/0672335395
    Here is the cover to look for in the stores.
    Description: Covers Windows Phone 7.5. In just 24 sessions of one hour or less, you'll learn how to develop mobile applications for Windows Phone 7! Using this book's straightforward, step-by-step approach, you'll learn the fundamentals of Windows Phone 7 app development, how to leverage Silverlight or the XNA Framework, and how to get your apps into the Windows Marketplace. One step at a time, you'll master new features ranging from the new sensors to using launchers and choosers. Each lesson builds on what you've already learned, helping you get the job done fast, and get it done right!
    - Step-by-step instructions carefully walk you through the most common Windows Phone 7 app development tasks.
    - Quizzes and exercises at the end of each chapter help you test your knowledge.
    - "By the Way" notes present interesting information related to the discussion.
    - "Did You Know?" tips offer advice or show you easier ways to perform tasks.
    - "Watch Out!" cautions alert you to possible problems and give you advice on how to avoid them.
    Learn how to: choose an application framework; use the sensors; develop touch-friendly apps; utilize push notifications; consume web data services; integrate with Windows Phone hubs; use the Bing Map control; get better performance out of your apps; work with data; localize your apps; use launchers and choosers; market and sell your apps. Thank you!

    Read the article

  • Creating a retro-style palette swapping effect in OpenGL

    - by Zack The Human
    I'm working on a Megaman-like game where I need to change the color of certain pixels at runtime. For reference: in Megaman, when you change your selected weapon, the main character's palette changes to reflect the selected weapon. Not all of the sprite's colors change, only certain ones do. This kind of effect was common and quite easy to do on the NES since the programmer had access to the palette and the logical mapping between pixels and palette indices. On modern hardware, though, this is a bit more challenging because the concept of palettes is not the same. All of my textures are 32-bit and do not use palettes. There are two ways I know of to achieve the effect I want, but I'm curious if there are better ways to achieve this effect easily. The two options I know of are:
    1. Use a shader and write some GLSL to perform the "palette swapping" behavior.
    2. If shaders are not available (say, because the graphics card doesn't support them), clone the "original" textures and generate different versions with the color changes pre-applied.
    Ideally I would like to use a shader since it seems straightforward and requires little additional work as opposed to the duplicated-texture method. I worry that duplicating textures just to change a color in them is wasting VRAM -- should I not worry about that?

    Read the article

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2\P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects are included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data that is included in the STAR schema. For example, your STAR schema can be filtered to only include all projects in a specific Portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a Where clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases; unnecessary projects could cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key for what projects are targeted during the ETL process. To set up the filter, perform the following steps:
    1. Connect to your Primavera P6 Project Management Database as Pxrptuser (extended schema owner) and create a new view:
       create or replace view star_project_view
       as
       select PROJECTOBJECTID objectid
       from projectportfolio pp, projectprojectportfolio ppp
       where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
       and pp.name = 'STAR Projects'
       The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will cause the view to be invalid and it will not work. Any Where clause can be used, but projectobjectid is the key.
    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it exists:
       star.project.filter.ds1=star_project_view
    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids as defined in the view created in step 1 above.

    Read the article

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there existed an SSIS Dataflow Destination component that enabled one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE Destination will be available. That being said, the SSIS team have made available a MERGE destination component via Codeplex which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary. In the past it has occurred to me that a built-in way to provide MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect:
    - BULK MERGE (111 votes at the time of writing)
    - [SSIS] BULK MERGE Destination (15 votes)
    If you think these would be useful feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet

    Read the article

  • How can I justify software testing to management?

    - by Nate
    I work for a small company (less than 200 employees) whose software group only makes up a small part of our staff (4 employees, occasionally with a few contractors). The four of us have been making strides in transitioning to better practices, and one of the next logical steps is to improve our testing. As anyone who has done any meaningful tests knows, testing takes a lot of time - and at my company, it takes too much time to justify to management, so we generally do what little we do on the sly. I don't think this is serving us well, as we keep coming up against otherwise avoidable problems when we ship under-tested software. I would like to be able to come to management with a justification for hiring a dedicated software test engineer (someone who can both write automated tests and perform manual ones). Are there any good published studies that show the benefits of adding such a position to a small company? Where can I find information about costs associated with the position? I plan on doing a little number crunching on our own history, but having some external sources to point to would help bolster my case.

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when not specified, or adding full path information to a relative path specified in the input. There are two different strategies to achieve this. The first strategy is to perform these transformations at input file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's the default or user-specified information. The second strategy is to have the file reader be a real one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled (which may however be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, eventually fall back to a default, or modify the input properly before using it. I would like to know your opinion on these two approaches, and in particular which one you have found the most frequently implemented.

    Read the article

  • Organising data access for dependency injection

    - by IanAWP
    In our company we have a relatively long history of database-backed applications, but have only just begun experimenting with dependency injection. I am looking for advice about how to convert our existing data access pattern into one more suited for dependency injection. Some specific questions:
    - Do you create one access object per table (given that a table represents an entity collection)? One interface per table? All of these would need the low-level Data Access object to be injected, right? What about if there are dozens of tables, wouldn't that make the composition root into a nightmare?
    - Would you instead have a single interface that defines things like GetCustomer(), GetOrder(), etc.? If I took the example of EntityFramework, then I would have one Container that exposes an object for each table, but that container doesn't conform to any interface itself, so it doesn't seem compatible with DI.
    What we do now, in case it helps: the way we normally manage data access is through a generic data layer which exposes CRUD/transaction capabilities and has provider-specific subclasses which handle the creation of IDbConnection, IDbCommand, etc. Actual table access uses Table classes that perform the CRUD operations associated with a particular table and accept/return domain objects that the rest of the system deals with. These table classes expose only static methods, and utilise a static DataAccess singleton instantiated from a config file.

    Read the article

  • Dynamic endpoint binding in Oracle SOA Suite by Cattle Crew

    - by JuergenKress
    Why is dynamic endpoint binding needed? Sometimes a BPEL process instance has to determine at run-time which implementation of a web service interface is to be called. We'll show you how to achieve that using dynamic endpoint binding. Let's imagine the following scenario: we're running a car rental agency called RYLC (Rent Your Legacy Car) which operates different locations. The process of renting a car is basically identical for all locations except for the determination of which cars are currently available. This is depicted in a diagram in the original article. There are three different implementations of the GetAvailableCars service. But how can we achieve calling them dynamically at run-time using Oracle SOA Suite? How to dynamically set the service endpoint: there are just a couple of implementation steps we need to perform to enable dynamic endpoint binding:
    - create a new SOA project in JDeveloper
    - add a CarRental BPEL process
    - add an external reference to the GetAvailableCars service within the composite
    - create a DVM file containing the URIs by which the services for the different locations can be accessed
    - set the endpointURI property on the Invoke component calling the GetAvailableCars service (the value is taken from the DVM file)
    Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Cattle crew,SOA binding,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project I did involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader, and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine, and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Object behaviour or separate class?

    - by Andrew Stephens
    When it comes to OO database access you see two common approaches - the first is to provide a class (say "Customer") with methods such as Retrieve(), Update(), Delete(), etc. The other is to keep the Customer class fairly lightweight (essentially just properties) and perform the database access elsewhere, e.g. using a repository. This choice of approaches doesn't just apply to database access, it can crop up in many different OOD scenarios. So I was wondering if one way is preferable over the other (although I suspect the answer will be "it depends")! Another dev on our team argues that to be truly OO the class should be "self-contained", i.e. providing all the methods necessary to manipulate and interact with that object. I personally prefer the repository approach - I don't like bloating the Customer class with all that functionality, and I feel it results in cleaner code having it elsewhere, but I can't help thinking I'm seriously violating core OO concepts! And what about memory implications? If I retrieve thousands of Customer objects I'm assuming those with the data access methods will take up a lot more memory than the property-only objects?

    Read the article

  • Why isn't Java used for modern web application development?

    - by Cliff
    As a professional Java programmer, I've been trying to understand - why the hate toward Java for modern web applications? I've noticed a trend that out of modern day web startups, a relatively small percentage of them appears to be using Java (compared to Java's overall popularity). When I've asked a few about this, I've typically received a response like, "I hate Java with a passion." But no one really seems to be able to give a definitive answer. I've also heard this same web startup community refer negatively to Java developers - more or less implying that they are slow, not creative, old. As a result, I've spent time working to pick up Ruby/Rails, basically to find out what I'm missing. But I can't help thinking to myself, "I could do this much faster if I were using Java," primarily due to my relative experience levels. But also because I haven't seen anything critical "missing" from Java, preventing me from building the same application. Which brings me to my question(s): Why is Java not being used in modern web applications? Is it a weakness of the language? Is it an unfair stereotype of Java because it's been around so long (it's been unfairly associated with its older technologies, and doesn't receive recognition for its "modern" capabilities)? Is the negative stereotype of Java developers too strong? (Java is just no longer "cool") Are applications written in other languages really faster to build, easier to maintain, and do they perform better? Is Java only used by big companies who are too slow to adapt to a new language?

    Read the article

  • Use Oracle Product Hub Business Events to Integrate Additional Logic into Your Business Flows

    - by ToddAC-Oracle
    Business events provide a mechanism to plug-in and integrate some additional business processes or custom code into standard business flows.  You could send a notification to a business User, write to advanced queues or perform some custom processes. In-built business events are available specifically for each flow like Item Creation, Item Updation, User-Defined Attribute Changes, Change Order Creation, Change Order Status Changes and others.To get a list of business events, refer to the PIM implementation Guide or Using Business Events in PLM and PIM Data Librarian (Doc ID 372814.1) .If you are planning to use business events, Doc ID 1074754.1 walks you through a setup with examples. How to Subscribe and Use Product Hub (PIM / APC) Business Events [Video] ? (Doc ID 1074754.1). Review the 'Presentation' section of Doc ID 1074754.1 for complete information and best practices to follow while implementing code for subscriptions. Learn things you might want to avoid, like commit statements for instance. Doc ID 1074754.1 also provides sample code for testing, and can be used to troubleshoot missing setups or frequently experienced issues. Take advantage and run a test ahead of time with the sample code to isolate any issues from within business specific subscription code.Get more out of Oracle Product Hub by using Business Events!

    Read the article

  • How to use the Raring/Saucy netboot installer to install Precise?

    - by mikepurvis
    We have a Haswell motherboard with onboard ethernet controllers which are not supported in the Precise (3.2) kernel. However, we're using netboot installation, and we'd really like to stick with the LTS version. Once the Precise install is completed, we can install the linux-generic-lts-saucy package, which gets us the ethernet hardware support which is ultimately required. So, our options are: Plug in a USB-Ethernet (or even wifi) dongle, perform the install that way. Modify the Precise installer to somehow include the required driver (a udeb, or some early_command invocation?) Modify the Raring installer (3.8 kernel, which supports the device) to instead install Precise. If it's possible the third option seems like the simplest and most logical to me. Now, we are already using the precise-updates installer (Aug 2013), as opposed to the original April 2012 installer. However, the precise-updates installer still appears to use the 3.2 kernel. I'm already comfortable with preseeding and modifying the netboot initrd. So my question is, can I somehow modify the Raring/Saucy netboot initrd to instead install Precise? Thanks.

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    The very useful TRY/CATCH error handling construct is widely used to catch all execution errors that do not close the database connection. The biggest downside is that in the case of multiple errors the TRY/CATCH mechanism will only catch the last error. An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists, and two errors are fired: 3201 and 3013. Assuming that we are using the TRY and CATCH construct, the ERROR_MESSAGE() function will catch the last message only. To work around this problem you can prepare a temporary table that will receive the statement output. Execute the statement inside the xp_cmdshell stored procedure: connect back to the SQL Server using the command line utility sqlcmd and redirect its output into the previously created temp table. After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. It's quite easy to accomplish as long as you know which statement was executed. In the case of generic executions you can query the output table and search for words like "Msg%Level%State%" that are usually a part of the error message. Furthermore, you don't need TRY/CATCH in the above workaround, since the xp_cmdshell procedure always finishes successfully and you can decide whether to fire the RAISERROR statement or not. Yours, Maria
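
    A minimal sketch of the workaround described above, assuming xp_cmdshell is enabled on the instance; the server, database and file names here are illustrative and the exact statements in the original post may differ.

      -- Temp table to receive the full sqlcmd output, one line per row.
      CREATE TABLE #restore_output (line NVARCHAR(4000) NULL);

      -- Run the restore through sqlcmd via xp_cmdshell so that every error line is captured.
      INSERT INTO #restore_output (line)
      EXEC xp_cmdshell 'sqlcmd -E -S MYSERVER -Q "RESTORE DATABASE MyDb FROM DISK = N''C:\backup\missing.bak''"';

      -- Parse the captured output; raise our own error if any error text is present.
      IF EXISTS (SELECT 1 FROM #restore_output WHERE line LIKE '%Msg%Level%State%')
          RAISERROR('The restore failed - query #restore_output for the complete error text.', 16, 1);

      DROP TABLE #restore_output;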

    Read the article

  • Discount Multilingual Day in the Life of User Experience

    - by ultan o'broin
    Super article by the WikiMedia Foundation engineering folks about Designing for the Multilingual Web, using the Wikipedia Universal Language Selector user interface as an example. Great ideas about tools that are available, as well as covering the basics of wireframing (mockups), prototyping, and user testing. Lots of inspiration there for developers and builders of apps who want to ensure their user experience (UX) really delivers for a global audience. Check out the use of the Firefox-based Pencil, how to translate your mockups, and how to perform remote user testing using Google+ Hangouts. Paul Giner demonstrates how to translate mockups. It is a little clunky and homespun in parts (I would prefer it if tools such as Pencil or Balsamiq Mockups could roundtrip directly from SVG to XLIFF, for example, and Pencil doesn't work yet with the latest versions of Firefox), and I am not sure how it really scales to enterprise-level use. However, the UX methodology is basically sound, and it reinforces the importance of designing and testing in more than one language. The most powerful message for me is that you do not need special resources, training or expensive tools to deliver great-looking usable apps if you're a developer. Definitely worth considering if you're building apps out there in the community.

    Read the article

  • Plugged in Not Charging.

    - by Eric Johnson
    Suggested steps to fix the nasty Windows power management issue of "plugged in, not charging".
    Option 1:
    1. Disconnect AC
    2. Shutdown
    3. Remove battery
    4. Connect AC
    5. Startup
    6. Under the Batteries category, right-click all of the Microsoft ACPI Compliant Control Method Battery listings, and select Uninstall (it's ok if you only have 1)
    7. Shutdown
    8. Disconnect AC
    9. Insert battery
    10. Connect AC
    11. Startup
    Option 2:
    1. Turn off laptop.
    2. Unplug AC power.
    3. Remove battery.
    4. Replace AC power.
    5. Turn on laptop, allow OS to boot.
    6. Once logged in to the machine, perform a normal shut down.
    7. Unplug AC power.
    8. Replace battery.
    9. Replace AC power.
    10. Turn on laptop, allow OS to boot. The battery should once again be charging as normal.
    Additional troubleshooting techniques:
    - Check battery charging status in the BIOS
    - Update BIOS
    - Replace battery (I did this and the new battery is not charging)
    - See if the battery charging light works when the laptop is powered down
    Supporting links:
    http://jeffreypalermo.com/blog/plugged-in-not-charging-windows-7-solution/
    http://social.technet.microsoft.com/Forums/en/itprovistahardware/thread/741398c6-a733-482c-a33c-2b61d9bc2984
    http://www.youtube.com/watch?v=6Xf-ipP0wSY&feature=fvw

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. This is to tell you that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations are happening every second in a constant update loop, or low-level systems which other programs rely on, such as OSs and VMs, etc. But for the normal, typical high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today, we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference to the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split in two: In practice, does performance matter today in a typical, normal business program? If it does, please give me real-world examples of places in such an application where performance and optimizations are important.

    Read the article

  • The Exceptional EXCEPT clause

    - by steveh99999
    OK, I exaggerate, but it can be useful... I came across some 'poorly-written' stored procedures on a SQL Server recently that were using sp_xml_preparedocument. Unfortunately these procs were not properly removing the memory allocated to XML structures - i.e. they were not subsequently calling sp_xml_removedocument. I needed a quick way of identifying how many stored procedures on the server this affected. Here's what I used:
    EXEC sp_msforeachdb 'USE ?
    SELECT DB_NAME(), OBJECT_NAME(s1.id) FROM syscomments s1 WHERE [text] LIKE ''%sp_xml_preparedocument%''
    EXCEPT
    SELECT DB_NAME(), OBJECT_NAME(s2.id) FROM syscomments s2 WHERE [text] LIKE ''%sp_xml_removedocument%'' '
    There are three nice features about the code above:
    1. It uses sp_msforeachdb. There's a nice blog on this statement here.
    2. It uses the EXCEPT clause. So in the above query I get all the procedures which include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures which contain sp_xml_removedocument. Read more about EXCEPT here.
    3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here.
    The above query isn't perfect - I'm not properly parsing the SQL text to ignore comments, for example - but for the quick analysis I needed to perform, it was just the job.

    Read the article

  • lirc_zilog IR transmission no longer working with HD-PVR on 12.04

    - by johnf
    I have been running Ubuntu 10.04 with a patched version of lirc_zilog for two years. I upgraded to 12.04 and lirc_zilog is no longer working with my HD-PVR. The MythTV wiki reports that it did work out of the box with 11.04. The error message I get on irsend is as follows:
    johnf@carbon:~$ /usr/local/bin/irsend SEND_ONCE blaster 0_130_KEY_POWER
    irsend: command failed: SEND_ONCE blaster 0_130_KEY_POWER
    irsend: hardware does not support sending
    The lircd daemon, run interactively, reports the following:
    lircd: accepted new client on /var/run/lirc/lircd
    lircd: could not get hardware features
    lircd: this device driver does not support the LIRC ioctl interface
    lircd: major number of /dev/lirc0 is 250
    lircd: LIRC major number is 61
    lircd: check if /dev/lirc0 is a LIRC device
    lircd: WARNING: Failed to initialize hardware
    lircd: error processing command: SEND_ONCE blaster 0_130_KEY_POWER
    lircd: hardware does not support sending
    lircd: removed client
    Checking dmesg seems to indicate that the kernel module is loading properly:
    [56497.730743] lirc_zilog: module is from the staging directory, the quality is unknown, you have been warned.
    [56497.730999] lirc_zilog: Zilog/Hauppauge IR driver initializing
    [56497.732484] lirc_zilog: ir_probe: ir_rx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x71
    [56497.732493] lirc_zilog: ir_probe: ir_tx_z8f0811_hdpvr on i2c-0 (Hauppage HD PVR I2C), client addr=0x70
    [56497.732496] lirc_zilog: probing IR Tx on Hauppage HD PVR I2C (i2c-0)
    [56497.756822] lirc_zilog: firmware of size 302355 loaded
    [56497.756945] lirc_zilog: 743 IR blaster codesets loaded
    [56497.757030] i2c i2c-0: lirc_dev: driver lirc_zilog registered at minor = 0
    [56497.757033] lirc_zilog: IR unit on Hauppage HD PVR I2C (i2c-0) registered as lirc0 and ready
    [56497.757035] lirc_zilog: probe of IR Tx on Hauppage HD PVR I2C (i2c-0) done
    [56497.757056] lirc_zilog: initialization complete
    Here is my /etc/lirc/hardware.conf:
    #Chosen IR Transmitter
    TRANSMITTER="HD-PVR"
    TRANSMITTER_MODULES="lirc_dev lirc_zilog"
    TRANSMITTER_DRIVER=""
    TRANSMITTER_DEVICE="/dev/lirc0"
    TRANSMITTER_SOCKET=""
    TRANSMITTER_LIRCD_CONF=""
    TRANSMITTER_LIRCD_ARGS=""
    My lircd.conf is a copy of the recommended one. Examination of the kernel source seems to indicate that the lirc_zilog module should support transmission; it's newer than the patched version I was manually compiling on 10.04. I was previously using a manually built version of lirc 0.8.7 and not the packaged one. I'm now running the packaged version 9.0. I can provide any additional information required and will perform tests quickly. I'm very eager to get this issue resolved.

    Read the article

  • sp_help

    - by David-Betteridge
    One of the nice things about SQL Server Management Studio (SSMS) is that you can highlight a table name in a script and press Alt + F1 to perform sp_help on it. Unfortunately I've never been able to use that feature, as the majority of the tables in our product belong to a schema other than dbo. On a long train journey back to York I wondered if I could solve this problem by writing my own replacement for sp_help (which I've called sp_help_table_schemas). My version works by first checking the system tables to find out which schema the table belongs to:
    SELECT s.Name   --Find the schema
    FROM sys.schemas s
    JOIN sys.tables t on t.schema_id = s.schema_id
    WHERE t.name = 'Orders'
    It then dynamically calls the standard sp_help method, but this time supplying the table owner as well:
    SET @cmd = 'EXEC sp_help ''' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName) + ''' ;' ;
    EXEC ( @cmd )
    Once I had proved the basics worked I wrapped it up into a stored procedure and deployed it to the master database on my laptop. It was then just a question of going into Tools > Options within SSMS and defining the keyboard shortcut. A couple of notes:
    - You can't amend the existing Alt+F1 entry, so I went with Ctrl+F1.
    - You need to open a new query window for the change to be picked up.
    So I can now highlight a table name and press Ctrl+F1. The completed script is attached. Thanks go to Martin Bell, who reviewed my stored procedure and gave some valuable advice.
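
    A rough sketch of how the two snippets above might fit together as a stored procedure; this is only an assumed shape of the attached script, not the script itself, and it ignores the case where the same table name exists in more than one schema.

      CREATE PROCEDURE sp_help_table_schemas
          @ObjectName sysname
      AS
      BEGIN
          DECLARE @SchemaName sysname, @cmd nvarchar(max);

          -- Find the schema the table belongs to (picks one arbitrarily if the name exists in several schemas).
          SELECT @SchemaName = s.name
          FROM sys.schemas s
          JOIN sys.tables t ON t.schema_id = s.schema_id
          WHERE t.name = @ObjectName;

          -- Call the standard sp_help, this time qualified with the owning schema.
          SET @cmd = 'EXEC sp_help ''' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ObjectName) + ''' ;';
          EXEC (@cmd);
      END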

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project on MVC 3, and I'm facing a small problem when attempting to handle validation in my application through a wizard. I tried to benefit from the ability of MVC 3 to deliver the content of a different view using the same URL when handling an [HttpPost] method on a page. In my case, my main model class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. I want the user to be able to enter his personal information in the first step, then respond to some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't reach the last step because of the ModelState.IsValid check that I use to handle validation, which can't work properly if I define some properties as [Required] but don't put them on the first view. As the replies to those questions come down to a couple of choices, I thought that I might use some nullable bool? properties in order to avoid validation issues, but I know that it's not the proper way. Is there someone who would like to help me find a way to extend my validation across those three steps? Thanks in advance, and sorry for my English, I'm not a native speaker.

    Read the article
