Search Results

Search found 844 results on 34 pages for 'dirty'.


  • what are good ways to implement search and search results using ajax?

    - by Amr ElGarhy
    I have some text boxes in a page, and in the same page there will be a table ('grid'-like) for holding the search results. When the user starts editing any of the textboxes above, the search must start by sending all textbox values to the server via AJAX, and the results come back to fill the grid below. Notes: this grid should support paging and sorting by clicking on headers, and it will contain some controls beside the results, such as checkboxes for boolean values and links for opening details in another page. I know many ways to do this; some of them are: 1- an UpdatePanel around all of these controls and that's it ("fast dirty solution"); 2- send the search criteria in an AJAX request (using the jQuery post function, for example), get back the JSON result, and draw the grid using a template ("clean, but will take time to finish and will be harder to edit later"); 3- .... My question is: what do you think will be the best choice to implement this scenario? I face this scenario a lot, and want to know which implementation will be better regarding performance, optimization, and time to finish. I just want to know your thoughts about this issue.
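    For reference, a minimal sketch of option 2 in jQuery — the endpoint URL, field names, and response shape below are assumptions for illustration only, not part of the original question:
        // Collect all search textboxes, post their values in one AJAX request,
        // and rebuild the grid body from the JSON rows that come back.
        function runSearch() {
            var criteria = {};
            $('#search-form input[type=text]').each(function () {
                criteria[this.name] = $(this).val();              // one key per textbox
            });
            $.post('/search', criteria, function (rows) {         // '/search' is a hypothetical endpoint
                var $body = $('#results tbody').empty();
                $.each(rows, function (i, row) {
                    $body.append(
                        $('<tr/>')
                            .append($('<td/>').text(row.name))                        // assumed field
                            .append($('<td/>').append($('<input type="checkbox"/>')
                                .prop('checked', row.isActive)))                      // boolean shown as a checkbox
                            .append($('<td/>').append($('<a/>')
                                .attr('href', 'details.aspx?id=' + row.id)
                                .text('Details')))
                    );
                });
            }, 'json');
        }
        $('#search-form input[type=text]').on('keyup', runSearch);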

    Read the article

  • C++, inject additional data in a method

    - by justik
    I am adding a new module to some large library. All methods here are implemented as static. Let me briefly describe the simplified model: typedef std::vector<double> TData; double test ( const TData &arg ) { return arg[0] * sin ( arg[1] + ...;} double ( * p_test ) ( const TData &arg) = &test; class A { public: static T f1 (TData &input) { .... //some computations B::f2 (p_test); } }; Inside f1() some computations are performed and a static method B::f2 is called. The f2 method is implemented by another author and represents some simulation algorithm (the example here is simplified). class B { public: static double f2 (double ( * p_test ) ( const TData &arg ) ) { //difficult algorithm calling p_test many times double res = p_test(arg); } }; The f2 method has a pointer to some weight function (here p_test). But in my case some additional parameters computed in f1 are required by the test() method: double test ( const TData &arg, const TData &arg2, char *arg3.... ) { } How can I inject these parameters into test() (and so into f2) without changing the source code of the f2 method (which is not trivial), without redesigning the library, and without dirty hacks :-) ? The simplest step is to overload f2: static double f2 (double ( * p_test ) ( const TData &arg ), const TData &arg2, char *arg3.... ) But what to do next? Consider that the methods are static, so there will be problems with objects. Thanks for your help.

    Read the article

  • Simulate Golf Game Strategy

    - by Mitchel Sellers
    I am working on what at best could be considered a "primitive" golf game, where after a certain bit of randomness is introduced I need to play out a hole of golf. The hole is static and I am not concerned about the UI aspect, as I just have to draw a line on a graphic of the hole after the ball has been hit, showing where it traveled. I'm looking for input on how to manage the "logic" side of the puzzle; below are some of my thoughts on the matter, and input, suggestions, or references are greatly appreciated. 1. Map out the hole into an array with a specific amount of precision, noting the type of surface: out of bounds, fairway, rough, green, sand, water, and most important the hole. 2. Map out "regions", and if the ball is contained inside one of these regions, set up parameters for the "maximum" angle of departure. (For example, in the first part of the hole the shot must be between certain angles.) 3. Using the current placement of the ball and the region from #2, define a routine to randomly select the shooting angle and power, then move the ball, adjust the trajectory and move again (a sketch of this step follows below). I know this isn't the "most elegant" solution, but in reality we are looking for a quick and dirty solution, as I just have to do this a few times, then set it and forget it. From a languages perspective I'll be using ASP.NET and C# to get this done.
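    A minimal sketch of step 3, shown in JavaScript purely for illustration (the author plans to use C#); the region structure, angle limits, carry distance, and surface lookup are all assumptions, not part of the original post:
        // Play one shot: pick a random angle within the current region's limits,
        // pick a random power, then advance the ball along that heading.
        function playShot(ball, regions) {
            var region = regions.find(function (r) { return r.contains(ball.x, ball.y); }); // assumes the ball is always in some region
            var angle = region.minAngle + Math.random() * (region.maxAngle - region.minAngle); // radians
            var power = 0.5 + Math.random() * 0.5;            // 50-100% of a full swing
            var distance = power * region.maxCarry;           // assumed per-region carry distance
            return {
                x: ball.x + distance * Math.cos(angle),
                y: ball.y + distance * Math.sin(angle)
            };
        }

        // Repeat until the surface lookup says the ball is in the hole (or we give up).
        function playHole(ball, regions, surfaceAt, maxShots) {
            var path = [ball];
            for (var i = 0; i < maxShots && surfaceAt(ball.x, ball.y) !== 'hole'; i++) {
                ball = playShot(ball, regions);
                path.push(ball);                              // each segment becomes a line on the graphic
            }
            return path;
        }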

    Read the article

  • jquery, attaching objects (instead of string attribute) to an element

    - by binaryLV
    Hi! I'm trying to build DOM with jQuery and fill it with data that is received with AJAX (data type = json). I'd like to also store this data as an object, attached to a specific DOM element. Does jQuery provide any method for this? The reason I want to do it is because only part of data is initially displayed; other data might be needed later, depending on user actions. I tried using attr(), but it stores a string "[object Object]" instead of an actual object: var div = $('<div/>'); div.attr('foo', {bar: 'foobar'}); alert(div.attr('foo')); // gives "[object Object]" alert(typeof div.attr('foo')); // gives "string" alert(div.attr('foo').bar); // gives "undefined" Another way to do this would be by "bypassing" jQuery (div[0].foo = {bar: 'foobar'};), though this seems to be a "dirty workaround", if jQuery happens to already support attaching objects. Any ideas? Thanks in advance!
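    One option worth looking at is jQuery's data() method, which stores arbitrary values (including objects) against an element rather than serializing them into attributes; a rough sketch of the example above rewritten with it:
        // Store an object against the element with .data() instead of .attr().
        var div = $('<div/>');
        div.data('foo', { bar: 'foobar' });

        alert(typeof div.data('foo'));   // "object", not "string"
        alert(div.data('foo').bar);      // "foobar"

        // .attr() coerces its value to a string, which is why "[object Object]" showed up before.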

    Read the article

  • JavaOne Tutorial Report - JavaFX 2 – A Java Developer’s Guide

    - by Janice J. Heiss
    Oracle Java Technology Evangelist Stephen Chin and Independent Consultant Peter Pilgrim presented a tutorial session intended to help developers get a handle on JavaFX 2. Stephen Chin, a Java Champion, is co-author of Pro JavaFX Platform 2, while Java Champion Peter Pilgrim is an independent consultant who works out of London.
    NightHacking with Stephen Chin
    Before discussing the tutorial, a note about Chin’s “NightHacking Tour,” wherein from 10/29/12 to 11/11/12 he will be traveling across Europe via motorcycle, stopping at JUGs, interviewing Java developers, and offering live video streaming of the journey. As he says, “Along the way, I will visit user groups, interviewing interesting folks, and hack on open source projects. The last stop will be the Devoxx conference in Belgium.” It’s a dirty job but someone’s got to do it. His trip will take him from the UK through the Netherlands, Germany, Switzerland, Italy, France, and finally to Devoxx in Belgium. He has interviews lined up with Ben Evans, Trisha Gee, Stephen Coulebourne, Martijn Verburg, Simon Ritter, Bert Ertman, Tony Epple, Adam Bien, Michael Hutterman, Sven Reimers, Andres Almiray, Gerrit Grunewald, Bertrand Boetzmann, Luc Duponcheel, Stephen Janssen, Cheryl Miller, and Andrew Phillips. If you expect to be in Chin’s vicinity at the end of October or in early November, by all means get in touch with him at his site and add your perspective. The more the merrier!
    Taking the JavaFX Plunge
    Now to the business at hand. The “JavaFX 2 – A Java Developer’s Guide” tutorial introduced the JavaFX 2 platform from the perspective of seasoned Java developers. It demonstrated the breadth of the JavaFX APIs through examples that are built out in the course of the session, in an effort to present the basic requirements for using JavaFX to build rich internet applications. Chin began with a quote from Oracle’s Christopher Oliver, the creator of F3, the original version of JavaFX, on the importance of GUIs: “At the end of the day, on the one hand we have computer systems, and on the other, people. Connecting them together, and allowing people to interact with computer systems in a compelling way, requires graphical user interfaces.” Chin explained that JavaFX is about producing an immersive application experience that involves cross-platform animation, video and charting. It can integrate Java, JavaScript and HTML in the same application. The new graphics stack takes advantage of hardware acceleration for 2D and 3D applications. In addition, Swing applications can be integrated using JFXPanel. He reminded attendees that they were building JavaFX apps using pure Java APIs that include builders for declarative construction; in addition, developers can call upon alternative languages such as GroovyFX, ScalaFX and Visage if they want simpler UI creation. He presented the fundamentals of JavaFX 2.0 – properties, lists and binding – and then explored primitive, object and FX list collection properties. Properties in JavaFX are observable, lazy and type safe. He then provided an example of property declaration in code. Pilgrim and Chin explained the architectural structure of JavaFX 2 and its basic properties – the JavaFX 2.0 property types:
    * Primitive Properties
    * Object Properties
    * FX List Collection Properties
    * Properties are: Observable, Lazy, Type Safe
    Chin and Pilgrim then took attendees through several participatory demos and got deep into the weeds of the code for the two-hour session. At the end, everyone knew a lot more about the inner workings of JavaFX 2.0.

    Read the article

  • Software development stack 2012

    A couple of months ago, I posted on Google+ about my evaluation period for a new software development stack in general. "Analysing the existing 'jungle' of multiple applications and tools in various languages for clarification and future design decisions. Great fun and lots of headaches... #DevelopersLife" Surprisingly, there was a response... ;-) - And this series of articles is initiated by that post. Thanks Olaf.
    The past few years... Well, after all, my first choice for software development in the past was Microsoft Visual FoxPro 6.0 - 9.0 in combination with Microsoft SQL Server 2000 - 2008 and Crystal Reports 9.x - XI. Honestly, it is still my main working environment due to existing maintenance and support plans with my customers, but also for new project requests. And... hands on, it is still my first choice for data manipulation and migration options. But the earth is spinning, and as a software craftsman one has to be flexible with the choice of tools. In parallel to my knowledge and expertise in the above-mentioned tools, I started very early to get my hands dirty with the Microsoft .NET Framework. If I remember correctly, I started back in 2002/2003 with the first version ever. But this was more out of curiosity. Over the years this kind of development got more serious and demanding, and I focused myself on interop and integration libraries and applications - mainly to expose existing features of the .NET Framework to Visual FoxPro. I even had a session about that at the German Developer's Conference in Frankfurt.
    Observation of recent developments
    With the recent hype on Javascript and HTML5, especially for Windows 8 and Windows Phone 8 development, I had several 'Deja vu' moments... Back in early 2006 (roughly) I had a conversation on the future of Web and Desktop development with my former colleagues Golo Roden and Thomas Wilting about the underestimation of Javascript and its roots as a prototype-based, dynamic, full-featured programming language. During this talk I took the Mozilla applications, namely Firefox and Thunderbird, as a reference; they are mainly based on XML, CSS, Javascript and images - besides the core rendering engine - and it is very simple to write your own extensions for the Gecko rendering engine. Looking at the Windows Vista Sidebar widgets just underlines this kind of usage. So, yes, the 'Modern UI' of Windows 8 based on HTML5, CSS3 and Javascript didn't come as any surprise to me. Just allow me to ask why it took Microsoft so long to come up with this step?
    A new set of tools
    Ok, coming from web development in HTML 4, CSS and Javascript prior to Visual FoxPro, I am partly going back to that combination of technologies. What is the other part of the software development stack here at IOS Indian Ocean Software Ltd? Frankly, it is easy and straightforward to describe:
    * Microsoft Visual FoxPro 9.0 SP 2 - still going strong!
    * Visual Studio 2012 (C# on the latest .NET Framework)
    * MonoDevelop
    * Telerik DevCraft Suite (WPF, ASP.NET MVC, Windows 8, Kendo UI, OpenAccess ORM, Reporting, JustCode)
    * CODE Framework by EPS Software
    * MonoTouch and Mono for Android
    * Subversion
    * and additional tools for the daily routine: Notepad++, JustCode, SQL Compare, DiffMerge, VMware, etc.
    Following the principles of Clean Code Developer and the Agile Manifesto. Actually, there is nothing special about this combination, but rather a solid foundation to work with and create line-of-business applications for customers. Honestly, I am really interested in your choice of 'weapons' for software development, and hopefully there might be some nice conversations in the comment section. Over the coming days/weeks I'm going to describe the reasons for my decision in a little more detail. Articles will be added bit by bit here as reference, too. Please bear with me... Regards, JoKi

    Read the article

  • Observations From The Corner of a Starbucks

    - by Chris Williams
    I’ve spent the last 3 days sitting in a Starbucks for 4-8 hours at a time. As a result, I’ve observed a lot of interesting behavior and people (most of whom were uninteresting themselves.) One of the things I’ve noticed is that most people don’t sit down. They come in, get their drink and go. The ones that do sit down, stay much longer than it takes to consume their drink. The drink is just an incidental purchase. Certainly not the reason they are here. Most of the people who sit also have laptops. Probably around 75%. Only a few have kids (with them) but the ones that do, have very small kids. Toddlers or younger. Of all the “campers” only a small percentage are wearing headphone, presumably because A) external noise doesn’t bother them or B) they aren’t working on anything important. My buddy George falls into category A, but he grew up in a house full of people. Silence freaks him out far more than noise. My brother and I, on the other hand, were both only children and don’t handle noisy distractions well. He needs it quiet (like a tomb) and I need music. Go figure… I can listen to Britney Spears mixed with Apoptygma Berzerk and Anthrax and crank out 30 pages, but if your toddler is banging his spoon on the table, you’re getting a dirty look… unless I have music, then all is right with the world. Anyway, enough about me. Most of the people who come in as a group are smiling when they enter. Half as many are smiling when they leave. People who come in alone typically aren’t smiling at all. The average age, over the last three days seems to be early 30s… with a couple of senior citizens and teenagers at either end of the curve. The teenagers almost never stay. They have better stuff to do on a nice day. The senior citizens are split nearly evenly between campers and in&outs. Most of the non-solo campers have 1 person with a laptop, while the other reads the paper or a book. Some campers bring multiple laptops… but only really look at one of them. This Starbucks has a drive through. The line is almost never more than 2-3 cars long but apparently a lot of the in&out people would rather come in and stand in line behind (up to) 5 people. The music in here sucks. My musical tastes can best be described as eclectic to bad, but I can still get work done (see above.) I find the music in this particular Starbucks to be discordant and jarring. At this Starbucks, the coffee lingo is apparently something that is meant to occur between employees only. The nice lady at the counter can handle orders in plain English and translate them to Baristaspeak (Baristese?) quite efficiently. If you order in Baristaspeak however, she will look confused and repeat your order back to you in plain English to confirm you actually meant what you said. Then she will say it in Baristaspeak to the lady making your drink. Nobody in this Starbucks (other than the Baristas) makes eye-contact… at least not with me. Of course that may be indicative of a separate issue. ;)

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I can't easily tell that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that is obvious to the human eye. So, the algorithm I need to write would need to use as few user-entered parameters as possible (for replicability and objectivity), go through all the particle positions, figure out which particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and each particle then only has to have distances calculated to the other particles in that grid cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
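    For what it's worth, a compact sketch of the neighbour-count idea in JavaScript, for illustration only; it uses a more conventional uniform-grid variant (check the 3×3 surrounding buckets rather than offset grids), the particle fields are assumptions, and d and minNeighbours are the same a priori thresholds mentioned above:
        // Bucket particles into a coarse grid, then count neighbours within d using only
        // the surrounding buckets instead of all pairs. Assumes gridSize >= d so the
        // 3x3 neighbourhood always covers the search radius.
        function clusterMass(particles, d, minNeighbours, gridSize) {
            var buckets = {};
            var key = function (ix, iy) { return ix + ',' + iy; };
            particles.forEach(function (p) {
                var ix = Math.floor(p.x / gridSize), iy = Math.floor(p.y / gridSize);
                (buckets[key(ix, iy)] = buckets[key(ix, iy)] || []).push(p);
            });

            var mass = 0;
            particles.forEach(function (p) {
                var ix = Math.floor(p.x / gridSize), iy = Math.floor(p.y / gridSize);
                var count = 0;
                for (var dx = -1; dx <= 1; dx++) {
                    for (var dy = -1; dy <= 1; dy++) {
                        (buckets[key(ix + dx, iy + dy)] || []).forEach(function (q) {
                            if (q !== p && Math.hypot(q.x - p.x, q.y - p.y) <= d) count++;
                        });
                    }
                }
                if (count >= minNeighbours) mass += p.mass;   // particle counts as "clumped"
            });
            return mass;   // total mass in clumps, stray particles excluded
        }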

    Read the article

  • Why I&rsquo;m Getting an iPad

    - by andrewbrust
    I have never purchased an Apple product in my life.  That’s a “true fact.”  And, for that matter, the last Apple product I really wanted was an Apple IIe, back in the 1980s.  I couldn’t afford it though (I was in high school), so I got a Commodore 64 instead…it had the same microprocessor, after all.  If the iPhone were on Verizon, I probably would have picked one up in December, when I got my Droid.  And if the iPod Touch worked with my Napster subscription (which of course it does not, but my Sonos does) I might have picked on of those instead. That’s three strikes, but Apple’s not out.  I’ve decided I want the iPad.  Why?  Well, to start with, my birthday is March 31st…the iPad comes out on April 3rd, and my wife wanted to know what to get me.  Also, my house is a 7-minute walk from the Apple Store on West 14th Street in Manhattan.  This makes it easy to get my pre-ordered device on launch day, and get home quickly with it.  Oh, and I agreed to write an article for Redmond Magazine, the fee for which will pay for the device…that way the birthday present doesn’t have to be an extravagant expense.  Plus, I’m a contrarian, so I want to buy the one device from Apple that the fanboys have actually panned. Think those are bad reasons? How about this: I want to experience iPhone and iPad development and, although my app will probably never hit the App Store and run on the actual device, I still think owning one will help me develop something better.  i want to see if the slate form factor has good business usage scenarios.  I want to see if Business Intelligence technology on a device like this can work.  Imagine a dashboard on this thing. And, for the consumer experience, I really want a touch device on which I can surf the Web while I’m in the kitchen, or on the couch.  I don’t want the small form factor of my phone, I don’t want to use my TV, and I don’t want a keyboard that will get dirty or in my way. I don’t want to watch movies on it (my TV is good for that), so I don’t care that the iPad has a 4:3 screen.  I don’t want to read books on it, so I don’t care that the display is backlit LCD, rather than eInk. But really what I want is to understand, first hand, why people have such brand loyalty to Apple.  I know the big reasons; I’m not detached from society.  But I want to know the subtle points of what Apple does really well, and also what they do poorly.  And I’d like to know, once and for all, if Microsoft can beat Apple, if Microsoft can think the right way to beat Apple and if Microsoft should  even try to beat Apple. I expect to share my thoughts on these questions, as they develop.

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, retail - big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data. And now for some worst practices that I'd recommend you please not follow: Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s Business Analytics, Data Warehouses, Business Applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds? Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and are poised for success through integrated design tools and integrated platforms that connect to your existing business applications, as well as support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights. Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key.
The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns. If you have not followed these worst practices 1-3 then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on twitter on #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Live Event: OTN Architect Day: Cloud Computing - Two weeks and counting

    - by Bob Rhubart
    In just two weeks architects and others will gather at the Oracle Conference Center in Redwood Shores, CA for the first Oracle Technology Network Architect Day event of 2013. This event focuses on Cloud Computing, and features sessions specifically focused on real-world examples of the implementation of cloud computing. When: Tuesday July 9, 2013              8:30am - 12:30pm Where: Oracle Conference Center              350 Oracle Pkwy              Redwood City, CA 94065 Register now. It's free! Here's the agenda: 8:30am - 9:00am Registration and Continental Breakfast 9:00am - 9:45am Keynote 21st Century IT | Dr. James Baty VP, Global Enterprise Architecture Program, Oracle Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the muti-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects. 9:45am - 10:30am Technical Session Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami Enterprise Architect, Oracle Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture. 10:30 am -10:45 am Break 10:45am-11:30am Technical Session Database as a Service | Michael Timpanaro-Perrotta Director, Product Management, Oracle Database Cloud New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise. 11:30 am - 12:00 pm Panel Q&A Dr. James Baty, Anbu Krishnaswami, and Michael Timpanaro-Perrotta respond to audience questions. Registration is free, but seating is limited, so register now.

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question on how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that, even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for fellow developers. OK, enough talk. Let’s face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database in most cases, really). Oftentimes, such an input file is produced by some sort of out-of-control, 3rd-party solution and comes in with some garbage characters and/or even malformed/mis-formatted rows. One such example could be this imaginary file: As you can see, several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let’s outline our course of action to start off:
    1. Use SSIS 2005 to create a DFT;
    2. The DFT will use a Flat File Source to read our input [bad] flat file;
    3. Use a Conditional Split to process the bad input file; and finally
    4. Dump the resulting data to a new [clean] file.
    Well, only four steps; let’s see if it is too much work. Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). Steps 2 and 3: I added the data viewer just to see what I am getting; alas, surprisingly, the data issues were not seen in it. What really is key in this approach is to properly set up the Conditional Split Transformation. Visually it is: and specifically its SSIS Expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name “No Empty Rows”, but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and choose the green arrow to drop onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package must look like: As the last step, we will run our package and examine the produced output file. F5: and… it looks great!

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn’t an academic paper. Journalists wouldn’t base a news article on facts that they can’t verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality, in other words, without knowing its lineage. (sometimes referred to as ‘provenance’ or ‘pedigree’) The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey to our report, from its source, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven’t been left out, or that the sources are good? Even when we’re using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which it came. You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you’re an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you’re a developer, working on a pipeline, it provides the context you need to track down the bug. If you’re a manager, or an auditor, it lets you know the right things are being done. Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the eco-system. To get a full picture of what’s going on in your Hadoop system you need to capture both Falcon lineage and the data-exhaust of other tools that Falcon can’t orchestrate. However, the problem is bigger even that that. Often, Hadoop is just one piece in a larger processing workflow. The next step of the challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where ‘after’ could be  a data analysis environment like R, an application, or even directly into an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power, and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with any tool they require to use. 
    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate data sources into the data analysis platform that you use, rather than being forced to use the tool that manages the pipeline for the lineage and a different tool for the data analysis. The key is to keep your logs and your audit data from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.

    Read the article

  • NHibernate Pitfalls: Custom Types and Detecting Changes

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. NHibernate supports the declaration of properties of user-defined types, that is, not entities, collections or primitive types. These are used for mapping a database columns, of any type, into a different type, which may not even be an entity; think, for example, of a custom user type that converts a BLOB column into an Image. User types must implement interface NHibernate.UserTypes.IUserType. This interface specifies an Equals method that is used for comparing two instances of the user type. If this method returns false, the entity is marked as dirty, and, when the session is flushed, will trigger an UPDATE. So, in your custom user type, you must implement this carefully so that it is not mistakenly considered changed. For example, you can cache the original column value inside of it, and compare it with the one in the other instance. Let’s see an example implementation of a custom user type that converts a Byte[] from a BLOB column into an Image: 1: [Serializable] 2: public sealed class ImageUserType : IUserType 3: { 4: private Byte[] data = null; 5: 6: public ImageUserType() 7: { 8: this.ImageFormat = ImageFormat.Png; 9: } 10: 11: public ImageFormat ImageFormat 12: { 13: get; 14: set; 15: } 16: 17: public Boolean IsMutable 18: { 19: get 20: { 21: return (true); 22: } 23: } 24: 25: public Object Assemble(Object cached, Object owner) 26: { 27: return (cached); 28: } 29: 30: public Object DeepCopy(Object value) 31: { 32: return (value); 33: } 34: 35: public Object Disassemble(Object value) 36: { 37: return (value); 38: } 39: 40: public new Boolean Equals(Object x, Object y) 41: { 42: return (Object.Equals(x, y)); 43: } 44: 45: public Int32 GetHashCode(Object x) 46: { 47: return ((x != null) ? x.GetHashCode() : 0); 48: } 49: 50: public override Int32 GetHashCode() 51: { 52: return ((this.data != null) ? this.data.GetHashCode() : 0); 53: } 54: 55: public override Boolean Equals(Object obj) 56: { 57: ImageUserType other = obj as ImageUserType; 58: 59: if (other == null) 60: { 61: return (false); 62: } 63: 64: if (Object.ReferenceEquals(this, other) == true) 65: { 66: return (true); 67: } 68: 69: return (this.data.SequenceEqual(other.data)); 70: } 71: 72: public Object NullSafeGet(IDataReader rs, String[] names, Object owner) 73: { 74: Int32 index = rs.GetOrdinal(names[0]); 75: Byte[] data = rs.GetValue(index) as Byte[]; 76: 77: this.data = data as Byte[]; 78: 79: if (data == null) 80: { 81: return (null); 82: } 83: 84: using (MemoryStream stream = new MemoryStream(this.data ?? new Byte[0])) 85: { 86: return (Image.FromStream(stream)); 87: } 88: } 89: 90: public void NullSafeSet(IDbCommand cmd, Object value, Int32 index) 91: { 92: if (value != null) 93: { 94: Image data = value as Image; 95: 96: using (MemoryStream stream = new MemoryStream()) 97: { 98: data.Save(stream, this.ImageFormat); 99: value = stream.ToArray(); 100: } 101: } 102: 103: (cmd.Parameters[index] as DbParameter).Value = value ?? 
DBNull.Value; 104: } 105: 106: public Object Replace(Object original, Object target, Object owner) 107: { 108: return (original); 109: } 110: 111: public Type ReturnedType 112: { 113: get 114: { 115: return (typeof(Image)); 116: } 117: } 118: 119: public SqlType[] SqlTypes 120: { 121: get 122: { 123: return (new SqlType[] { new SqlType(DbType.Binary) }); 124: } 125: } 126: } In this case, we need to cache the original Byte[] data because it’s not easy to compare two Image instances, unless, of course, they are the same.

    Read the article

  • jQuery - discrepancy between class name and selectors

    - by Ciel
    I have the following code that I wrote, which I personally found to be pretty nice. It takes a <ul> and it drops down the contents when clicked. But I am having a disconnect here in comprehension, and one I had to do what I feel is a 'dirty hack' to solve. The problem is that I do not want the class `'sidebar-dropdown-open' to be so 'hardwired' in the plugin. However I discovered that there is a very stark difference between... $('.sidebar-dropdown-open') and 'sidebar-dropdown-open and even '.sidebar-dropdown-open. I 'solved' this problem by including two different 'parameters' in my plugin, but I was wondering if someone might give me some insight as to how I could perform this better, and why this was behaving this way. wiring (document load) $(document).ready(function () { $('[data-role="sidebar-dropdown"]').drawer({ open: 'sidebar-dropdown-open', css: '.sidebar-dropdown-open' }); }); html <ul> <li class=" dropdown" data-role="sidebar-dropdown"> <a href="pages/.." class="remote">Link Text</a> <ul class="sub-menu light sidebar-dropdown-menu"> <li><a class="remote" href="pages/...">Link Text</a></li> <li><a class="remote" href="pages/...">Link Text</a></li> <li><a class="remote" href="pages/...">Link Text</a></li> </ul> </li> </ul> javascript (function ($) { $.fn.drawer = function (options) { // Create some defaults, extending them with any options that were provided var settings = $.extend({ open: 'open', css: '.open' }, options); return this.each(function () { $(this).on('click', function (e) { // slide up all open dropdown menus $(settings.css).not($(this)).each(function () { $(this).removeClass(settings.open); // retrieve the appropriate menu item var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu"); // slide down the one clicked on. $menu.slideUp('fast'); $menu.removeClass('active'); }); // mark this menu as open $(this).addClass(settings.open); // retrieve the appropriate menu item var $menu = $(this).children(".dropdown-menu, .sidebar-dropdown-menu"); // slide down the one clicked on. $menu.slideDown(100); $menu.addClass('active'); e.preventDefault(); e.stopPropagation(); }).on("mouseleave", function () { $(this).children(".dropdown-menu").hide().delay(300); }); }) }; })(jQuery); I have tried using settings.open and demanding that it just be a className (.open), etc. - but that does not seem to work. It seems to get ignored by the removeClass function.
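    As a side note, a minimal sketch of one way to avoid passing both options (assuming the only real difference is the leading dot): store the bare class name and derive the selector from it, since addClass()/removeClass() expect class names while $() expects a selector. The plugin body is trimmed here to show only the naming concern:
        (function ($) {
            $.fn.drawer = function (options) {
                // Keep a single option: the bare class name (no dot).
                var settings = $.extend({ open: 'open' }, options);
                var openSelector = '.' + settings.open;   // build the selector only where one is needed

                return this.each(function () {
                    $(this).on('click', function (e) {
                        $(openSelector).not(this).removeClass(settings.open); // selector form for $()
                        $(this).addClass(settings.open);                      // class-name form for addClass()
                        e.preventDefault();
                    });
                });
            };
        })(jQuery);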

    Read the article

  • How to parse a CSV file containing serialized PHP? [migrated]

    - by garbetjie
    I've just started dabbling in Perl, to try and gain some exposure to different programming languages - so forgive me if some of the following code is horrendous. I needed a quick and dirty CSV parser that could receive a CSV file, and split it into file batches containing "X" number of CSV lines (taking into account that entries could contain embedded newlines). I came up with a working solution, and it was going along just fine. However, as one of the CSV files that I'm trying to split, I came across one that contains serialized PHP code. This seems to break the CSV parsing. As soon as I remove the serialization - the CSV file is parsed correctly. Are there any tricks I need to know when it comes to parsing serialized data in CSV files? Here is a shortened sample of the code: use strict; use warnings; my $csv = Text::CSV_XS->new({ eol => $/, always_quote => 1, binary => 1 }); my $out; my $in; open $in, "<:encoding(utf8)", "infile.csv" or die("cannot open input file $inputfile"); open $out, ">outfile.000"; binmode($out, ":utf8"); while (my $line = $csv->getline($in)) { $lines++; $csv->print($out, $line); } I'm never able to get into the while loop shown above. As soon as I remove the serialized data, I suddenly am able to get into the loop. Edit: An example of a line that is causing me trouble (taken straight from Vim - hence the ^M): "26","other","1","20,000 Subscriber Plan","Some text here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","37","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","6","1" "27","other","1","35,000 Subscriber Plan","Some test here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","38","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","7","1" "28","other","1","50,000 Subscriber Plan","Some text here.^M\ Some more 
text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","39","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","8","1""73","other","8","10,000,000","","","","0","","0","","0","0","recurring","0","","payment","","0","","","75","0","10000000","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:17:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","14","0"

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade.  It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies.  Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML.  This usually starts with a “quick and dirty” approach because you need an XML document and looks like (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string): return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);   In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead.  Work’s done, right?  No it’s not.  You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values.  If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;. Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers.  This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same.  This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of.  The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut.  Sooner or later you end up writing your own somewhat functional XML engine. I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine.  My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine?  Easy – use one of the ones provided to you for free!  If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).  
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:
        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }
    Looking at that code, it’s easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:
        return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();
    While it is very common for people to use string manipulation to create XML, I’ve discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I’ve given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.

    Read the article

  • Ubuntu suddenly freezes

    - by tapan
    I've a strange problem with my ubuntu 10.04 installation. Whenever i boot into ubuntu the entire system freezes / hangs soon after (~ 2 mins in). This problem exists on my windows 7 installation too. However if i start World of Warcraft or Warcraft on windows it doesnt hang for the duration i'm playing the game. After i stop playing and exit the game my laptop hangs inside 2 mins. Here is when it gets weirder. If i disconnect the charger, the laptop doesn't hang. However when I start it in ubuntu recovery mode and drop to root shell and use the 'startx' command everything works perfectly. I cannot figure out what the problem is. i have an intel core2duo 2.2ghz processor, intel mobile 965 graphics, 2 GB RAM for more details here is the output of cat /proc/cpuinfo : processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz stepping : 11 cpu MHz : 2201.000 cache size : 4096 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida tpr_shadow vnmi flexpriority bogomips : 4389.80 clflush size : 64 power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz stepping : 11 cpu MHz : 2201.000 cache size : 4096 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida tpr_shadow vnmi flexpriority bogomips : 4388.96 clflush size : 64 power management: here is the output of cat /proc/meminfo MemTotal: 2052440 kB MemFree: 55924 kB Buffers: 579352 kB Cached: 821752 kB SwapCached: 704 kB Active: 897124 kB Inactive: 1032256 kB Active(anon): 412140 kB Inactive(anon): 264804 kB Active(file): 484984 kB Inactive(file): 767452 kB Unevictable: 0 kB Mlocked: 0 kB HighTotal: 1178440 kB HighFree: 6012 kB LowTotal: 874000 kB LowFree: 49912 kB SwapTotal: 995988 kB SwapFree: 986616 kB Dirty: 8928 kB Writeback: 0 kB AnonPages: 527596 kB Mapped: 76536 kB Slab: 39480 kB SReclaimable: 21100 kB SUnreclaim: 18380 kB PageTables: 5672 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 2022208 kB Committed_AS: 1856400 kB VmallocTotal: 122880 kB VmallocUsed: 11928 kB VmallocChunk: 104644 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 892928 kB Also the kern.log doesn't show any errors. What I want to know is what might be the problem, how i could test for it and if there are any solutions I could try. Thanks :).

    Read the article

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    EDIT: I noticed that this is more appropriate for superuser.com, I apologize. I don't know how to delete this question. I'm using kubuntu jaunty (i386 32bit), kernel 2.6.28-13-generic. I've 4Gb of RAM, of which only 3317Mb are seen by the system (I guess because of the 32bit system). I'm seeing that the pagecache utilization is continually growing, up to the point that the system is unusable (after a few days). This happens also when I don't do anything (all user applications closed and the bare minimum of services enabled). If enabled, the system starts to use swap space (using it all in the end). Even if swap is disabled, disk activity becomes continuous, with the system unresponsive. For example, right now the system is working (albeit a tad slow), with only firefox and wing ide running, and I have 2Gb cached with only 45Mb mapped: $ free total used free shared buffers cached Mem: 3346388 3247328 99060 0 8416 2117980 -/+ buffers/cache: 1120932 2225456 Swap: 2144668 519448 1625220 $ cat /proc/meminfo MemTotal: 3346388 kB MemFree: 97128 kB Buffers: 7872 kB Cached: 2120224 kB SwapCached: 413860 kB Active: 2304596 kB Inactive: 865984 kB Active(anon): 2279168 kB Inactive(anon): 830236 kB Active(file): 25428 kB Inactive(file): 35748 kB Unevictable: 32 kB Mlocked: 32 kB HighTotal: 2492940 kB HighFree: 5456 kB LowTotal: 853448 kB LowFree: 91672 kB SwapTotal: 2144668 kB SwapFree: 1625244 kB Dirty: 84 kB Writeback: 0 kB AnonPages: 629304 kB Mapped: 45768 kB Slab: 45600 kB SReclaimable: 21756 kB SUnreclaim: 23844 kB PageTables: 4468 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 3817860 kB Committed_AS: 3735020 kB VmallocTotal: 122880 kB VmallocUsed: 9352 kB VmallocChunk: 66600 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 16376 kB DirectMap4M: 888832 kB If I try to drop the caches, little happes: # sync ; echo 3 > /proc/sys/vm/drop_caches ; free total used free shared buffers cached Mem: 3346388 3220580 125808 0 3020 2100600 -/+ buffers/cache: 1116960 2229428 Swap: 2144668 519356 1625312 Right now I've vm.swappiness = 5, but I've tried also with 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said the pagecache eats all memory even with swapping turned off. What is happening? How to avoid this? TIA, Marco
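    One thing worth checking (a diagnostic sketch only, assuming standard coreutils and procps) is whether the ever-growing "cached" figure is really reclaimable page cache at all: tmpfs files and shared-memory segments are also counted under Cached in /proc/meminfo, and those cannot be released by drop_caches:

        # tmpfs mounts and how much space they are using
        df -h -t tmpfs

        # System V shared-memory segments (also accounted as Cached)
        ipcs -m

        # Largest resident processes, to separate real memory pressure from cache
        ps -eo pid,rss,comm --sort=-rss | head -n 15

    If tmpfs or shm segments account for most of the cached memory, tuning vm.swappiness or vm.vfs_cache_pressure will not help; whatever created those segments has to release them.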

    Read the article

  • Using a 64bit Linux kernel, can't see more than 4GB of RAM in /proc/meminfo

    - by Chris Huang-Leaver
    I'm running my new computer which has 8GB of RAM installed, which is visable from BIOS page, does not show in /proc/meminfo uname -a Linux localhost 3.0.6-gentoo #2 SMP PREEMPT Sat Nov 19 10:45:22 GMT-- x86_64 AMD Phenom(tm) II X4 955 Processor AuthenticAMD GNU/Linux The result of /proc/meminfo is as follows: (thans Andrey) MemTotal: 4021348 kB MemFree: 1440280 kB Buffers: 23696 kB Cached: 1710828 kB SwapCached: 4956 kB Active: 1389904 kB Inactive: 841364 kB Active(anon): 1337812 kB Inactive(anon): 714060 kB Active(file): 52092 kB Inactive(file): 127304 kB Unevictable: 32 kB Mlocked: 32 kB SwapTotal: 8388604 kB SwapFree: 8047900 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 492732 kB Mapped: 47528 kB Shmem: 1555120 kB Slab: 267724 kB SReclaimable: 177464 kB SUnreclaim: 90260 kB KernelStack: 1176 kB PageTables: 12148 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 10399276 kB Committed_AS: 3293896 kB VmallocTotal: 34359738367 kB VmallocUsed: 317008 kB VmallocChunk: 34359398908 kB AnonHugePages: 120832 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 23552 kB DirectMap2M: 3088384 kB DirectMap1G: 1048576 kB I have tried using mem=8G as a kernel boot parameter, I read a post about setting HIGHMEM64G to yes, before realising that only applies to 32bit kernels. Trying dmindecode -t memory SMBIOS 2.7 present. Handle 0x0026, DMI type 16, 23 bytes Physical Memory Array Location: System Board Or Motherboard Use: System Memory Error Correction Type: Multi-bit ECC Maximum Capacity: 32 GB Error Information Handle: Not Provided Number Of Devices: 4 Handle 0x0028, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 4096 MB Form Factor: DIMM Set: None Locator: DIMM0 Bank Locator: BANK0 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz Manufacturer: Manufacturer0 Serial Number: SerNum0 Asset Tag: AssetTagNum0 Part Number: Array1_PartNumber0 Rank: Unknown Handle 0x002A, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: Unknown Data Width: 64 bits Size: No Module Installed Form Factor: DIMM Set: None Locator: DIMM1 Bank Locator: BANK1 Type: Unknown Type Detail: Synchronous Speed: Unknown Manufacturer: Manufacturer1 Serial Number: SerNum1 Asset Tag: AssetTagNum1 Part Number: Array1_PartNumber1 Rank: Unknown Handle 0x002C, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: 64 bits Data Width: 64 bits Size: 4096 MB Form Factor: DIMM Set: None Locator: DIMM2 Bank Locator: BANK2 Type: <OUT OF SPEC> Type Detail: Synchronous Speed: 1333 MHz Manufacturer: Manufacturer2 Serial Number: SerNum2 Asset Tag: AssetTagNum2 Part Number: Array1_PartNumber2 Rank: Unknown Handle 0x002E, DMI type 17, 34 bytes Memory Device Array Handle: 0x0026 Error Information Handle: Not Provided Total Width: Unknown Data Width: 64 bits Size: No Module Installed Form Factor: DIMM Set: None Locator: DIMM3 Bank Locator: BANK3 Type: Unknown Type Detail: Synchronous Speed: Unknown Manufacturer: Manufacturer3 Serial Number: SerNum3 Asset Tag: AssetTagNum3 Part Number: Array1_PartNumber3 Rank: Unknown
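    One quick check (just a diagnostic sketch, assuming the boot messages are still in the kernel ring buffer) is to look at the BIOS e820 memory map the kernel was handed at boot, which shows whether the missing 4 GB was ever offered to Linux at all:

        # Memory map as reported by the BIOS to the kernel at boot time
        dmesg | grep -i e820

        # Total usable RAM as the running kernel sees it now
        grep MemTotal /proc/meminfo

    If the e820 map itself ends around 4 GB, the memory is being hidden before Linux starts, which usually points at a BIOS setting (often called memory remapping or memory hole) or a board limitation rather than at the kernel configuration; the dmidecode output above does show both 4 GB modules detected by the firmware, so the modules themselves appear to be recognised.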

    Read the article

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP. Back story: my boss travels a lot and wants to be able to access all his files from the road, and he is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure is strange to get around this limitation. Specs: it's OpenVPN 2, and there are six machines using it (only three actually use it, the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP situation means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, then they talk to each other from there. The Wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a Magic Packet from inside the LAN without confusing my boss. The VPN uses client config files to give the clients static IPs. The only thing I found on Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode. Server config: tun-mtu 1500 fragment 1450 mssfix 1450 management localhost #### port #### proto udp dev tun ca ####### cert ####### key ###### dh ###### server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt client-config-dir ccd route 10.8.0.0 255.255.255.252 client-to-client keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status log Client configs (all are simple variations on this): tun-mtu 1500 fragment 1450 mssfix 1450 client dev tun proto udp remote ######## #### resolv-retry infinite nobind persist-key persist-tun ca ##### cert ##### key ##### ns-cert-type server comp-lzo verb 3
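    Given the satellite link, MTU and fragmentation problems are a common cause of exactly this kind of one-platform slowness, so one hedged thing to try (a sketch only; vpn.example.com stands in for the real server address) is to find the largest payload that crosses the link without fragmentation from each client and then set fragment/mssfix just below it:

        # From the Mac: -D sets the don't-fragment bit, -s is the payload size.
        # Lower the size until replies come back without fragmentation errors.
        ping -D -s 1400 vpn.example.com

        # The equivalent probe from a Windows client, for comparison:
        #   ping -f -l 1400 vpn.example.com

        # Then, in both the server and client configs, set the values a little
        # below the largest size that worked, e.g.:
        #   fragment 1300
        #   mssfix 1300

    The ping probe only measures the path; whatever numbers come out still need to be set consistently on both ends of the tunnel for fragment/mssfix to help.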

    Read the article

  • umount bind of stale NFS

    - by Paul Eisner
    I've got a problem removing mounts created with mount -o bind from a locally mounted NFS folder. Assume the following mount structure: NFS mounted directory: $ mount -o rw,soft,tcp,intr,timeo=10,retrans=2,retry=1 \ 10.20.0.1:/srv/source /srv/nfs-source Bound directory: $ mount -o bind /srv/nfs-source/sub1 /srv/bind-target/sub1 Which results in this mount map: $ mount /dev/sda1 on / type ext3 (rw,errors=remount-ro) # ... 10.20.0.1:/srv/source on /srv/nfs-source type nfs (rw,soft,tcp,intr,timeo=10,retrans=2,retry=1,addr=10.20.0.100) /srv/nfs-source/sub1 on /srv/bind-target/sub1 type none (rw,bind) If the server (10.20.0.1) goes down (e.g. ifdown eth0), the handles become stale, which is expected. I can now unmount the NFS mount with force: $ umount -f /srv/nfs-source This takes some seconds, but works without any problems. However, I cannot unmount the bound directory /srv/bind-target/sub1. The forced umount results in: $ umount -f /srv/bind-target/sub1 umount2: Stale NFS file handle umount: /srv/bind-target/sub1: Stale NFS file handle umount2: Stale NFS file handle Here is a trace: http://pastebin.com/ipvvrVmB I've tried unmounting the sub-directories beforehand and looking for any processes accessing anything within the NFS or bind mounts (there are none). lsof also complains: $ lsof -n lsof: WARNING: can't stat() nfs file system /srv/nfs-source Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/sub1 (deleted) Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/ Output information may be incomplete. I've tried with recent stable Linux kernels 3.2.17, 3.2.19 and 3.3.8 (I cannot use 3.4.x because I need the grsecurity patch, which is not yet supported - grsecurity is not patched in in the tests above!). My nfs-utils are version 1.2.2 (Debian stable). Does anybody have an idea how I can either: force the unmount some other way? (any dirty trick is welcome, data loss or damage is negligible at this point) or use something else instead of mount -o bind? (I cannot use soft links, because the mounted directories will be used in a chroot; bindfs via FUSE is far too slow to be an option) Thanks, Paul Update 1: With 2.6.32.59 the umount of the (stale) sub-mounts works just fine. It seems to be a kernel regression bug. The above tests were with NFSv3; additional tests with NFSv4 showed no change. Update 2: We have now tested multiple 2.6 and 3.x kernels and are sure that this was introduced in 3.0.x. We will file a bug report; hopefully they figure it out.
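    Since data loss is explicitly acceptable here, the usual dirty trick is a lazy (detached) unmount, which removes the mount point from the namespace immediately and lets the kernel clean up once nothing references it any more; a sketch:

        # Detach the bind mount right away (MNT_DETACH); no stale-handle check
        umount -l /srv/bind-target/sub1

        # If anything still lingers, detach the parent NFS mount lazily as well
        umount -l /srv/nfs-source

    This sidesteps the stale-handle error rather than fixing it, so it does not address the 3.0.x regression described in the updates, but it should get the mount point out of the way on the affected kernels.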

    Read the article

  • Excluding child processes from ps

    - by stefpet
    Background: To reload app configuration I need to kill -HUP the parent processes' PIDs. To find PIDs I currently use ps auxf | grep gunicorn with the following example output: $ ps auxf | grep gunicorn stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4225 0.0 0.4 76920 16332 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4226 0.0 0.4 76932 16340 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4227 0.0 0.4 76940 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4228 0.0 0.4 76948 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4229 0.0 0.4 76960 16356 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4230 0.0 0.4 76972 16368 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4231 0.0 0.4 78856 18644 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4232 0.0 0.4 76992 16376 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5685 0.0 0.0 22076 908 pts/1 S+ 11:50 0:00 | \_ grep --color=auto gunicorn stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5021 0.0 0.4 77656 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5022 0.0 0.4 77664 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5023 0.0 0.4 77672 17164 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5024 0.0 0.4 77684 17196 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5025 0.0 0.4 77692 17200 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5026 0.0 0.4 77700 17208 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5027 0.0 0.4 77712 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5028 0.0 0.4 77720 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py Based on the above I see that it is 4222 and 5012 I need to HUP. Question: How can I exclude the child processes and only get the parent process (please note however that the processes I want do also have a parent (e.g. bash) that I'm uninterested with)? Using a regexp with grep on how much indentation there is in the ascii tree feels dirty. Is there a better way? Example: The desired output would be something like this. stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py This would be easily parseable to be able to automatically find the PIDs in a script that does the HUPing which is the goal.
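    A sketch of one way to do this without counting indentation (it assumes GNU ps and awk, and that a "parent" gunicorn process is simply one whose own parent is not a gunicorn process):

        # Keep only gunicorn processes whose PPID is not itself a gunicorn PID,
        # i.e. the master processes.
        ps -eo pid,ppid,args | awk '
            /gunicorn/ && !/awk/ { line[$1] = $0; parent[$1] = $2 }
            END { for (p in line) if (!(parent[p] in line)) print line[p] }'

        # The same selection can feed kill -HUP directly:
        kill -HUP $(ps -eo pid,ppid,args | awk '
            /gunicorn/ && !/awk/ { parent[$1] = $2; seen[$1] = 1 }
            END { for (p in seen) if (!(parent[p] in seen)) print p }')

    On systems with pgrep, something like pgrep -o -f "gunicorn app_api" (oldest matching process) often returns the master directly, since the workers are forked after it, but the awk version does not depend on start order.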

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34  | Next Page >