Search Results

Search found 4603 results on 185 pages for 'lower bound'.

Page 85/185

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus

    The focus for this 3rd generation of Big Data Appliance is:

      • Comprehensive and Open - Big Data Appliance now includes all Cloudera software, including Back-up and Disaster Recovery (BDR), Search, Impala, and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE)
      • Lower TCO than DIY Hadoop systems
      • Simplified operations while providing an open platform for the organization
      • Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry, and Kerberos configured out-of-the-box

    Hardware Update

    A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, here is a comparison between the old (X3-2) and new (X4-2) hardware:

                   Big Data Appliance X3-2                      Big Data Appliance X4-2
      CPU          2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)    2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
      Memory       64GB                                         64GB
      Disk         12 x 3TB High Capacity SAS                   12 x 4TB High Capacity SAS
      InfiniBand   40Gb/sec                                     40Gb/sec
      Ethernet     10Gb/sec                                     10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation while adding faster CPUs. Memory for BDA is expandable to 512GB per node and can be upgraded on a per-node basis, for example for NameNodes, HBase region servers, or NoSQL Database nodes.

    Software Details

    More details on the software and the current versions (note that BDA follows a three-monthly update cycle for Cloudera and other software):

                         Big Data Appliance 2.2 Software Stack    Big Data Appliance 2.3 Software Stack
      Linux              Oracle Linux 5.8 with UEK 1              Oracle Linux 6.4 with UEK 2
      JDK                JDK 6                                    JDK 7
      Cloudera CDH       CDH 4.3                                  CDH 4.4
      Cloudera Manager   CM 4.6                                   CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers.

    For more information:

      • Big Data Appliance Data Sheet
      • Big Data Connectors Data Sheet
      • Oracle NoSQL Database Data Sheet (CE | EE)
      • Oracle Advanced Analytics Data Sheet


  • Performance due to entity update

    - by Rizzo
    I always think about two ways to code the global Step() function, both with pros and cons. Please note that AIStep() is just there to provide one more kind of step for whoever wants it.

        // Approach 1 step
        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_fixed_step ) entity.FixedStep();
            if( time_for_AI_step ) entity.AIStep();
            ... // all kinds of updates you want
        }

    PRO: you just have to iterate once over all entities.
    CON: fidelity could be lower in some scenarios, since entity.FixedStep() isn't run for all entities at the same time.

        // Approach 2 step
        foreach( entity in entities )
            entity.DeltaStep( delta_time );

        if( time_for_fixed_step )
            foreach( entity in entities )
                entity.FixedStep();

        if( time_for_AI_step )
            foreach( entity in entities )
                entity.AIStep();

        // all kinds of updates you want, SEPARATED

    PRO: fidelity on FixedStep() is higher; there shouldn't be much time between the updates of all entities, unlike Approach 1, where other updates may have to run before FixedStep() comes.
    CON: you iterate once for each kind of update.

    Also, a third approach could be a hybrid of the two, something along the lines of:

        foreach( entity in entities )
        {
            entity.DeltaStep( delta_time );
            if( time_for_AI_step ) entity.AIStep();
            // all kinds of updates you want EXCEPT FixedStep()
        }

        if( time_for_fixed_step )
        {
            foreach( entity in entities )
            {
                entity.FixedStep();
            }
        }

    Just two loops, not caring about time fidelity for anything other than FixedStep(). Any thoughts on this matter? Does it really matter to make all steps at once, or am I thinking about problems that don't exist?
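    For reference, here is a minimal sketch (in Python, purely illustrative) of the classic fixed-timestep accumulator that typically drives a flag like time_for_fixed_step; the entity methods are hypothetical stand-ins for DeltaStep()/FixedStep():

        import time

        FIXED_DT = 1.0 / 60.0  # fixed-step interval in seconds

        def run(entities, frames=10):
            accumulator = 0.0
            previous = time.perf_counter()
            for _ in range(frames):
                now = time.perf_counter()
                frame_dt = now - previous
                previous = now
                accumulator += frame_dt

                # Variable-rate step: once per frame, for every entity.
                for e in entities:
                    e.delta_step(frame_dt)

                # Fixed-rate step: zero or more times per frame, updating
                # all entities in lockstep (the Approach 2 shape).
                while accumulator >= FIXED_DT:
                    for e in entities:
                        e.fixed_step()
                    accumulator -= FIXED_DT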


  • Webcast: Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Etienne Remillon
    Interested in upgrading from DSEE to OUD? Register to learn from a customer that has already done it.

    Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    Oracle Unified Directory (OUD) is the world’s first unified directory solution with highly integrated storage, synchronization, and proxy capabilities. These capabilities help meet the evolving needs of enterprise architectures. OUD customers can lower the cost of administration and ownership by maintaining a single directory for all of their enterprise needs, while also simplifying their enterprise architecture. OUD is optimized for mobile and cloud computing environments where elastic scalability becomes critical, as service providers need a solution that can scale by dynamically adding more directory instances without re-architecting their solutions to support exponential business growth.

    Join us for this webcast and you will:

      • Learn from one customer that has successfully upgraded to the new platform
      • See what technology and business drivers influenced the upgrade
      • Hear about the benefits of OUD’s elastic scalability and unparalleled performance
      • Get additional information and resources for planning an upgrade

    Register now for this complimentary webcast: Thursday, September 13, 2012, 10:00 a.m. PT / 1:00 p.m. ET.


  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge - whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through Professional Development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them.

    I realise that PD policy is going to vary across companies of different size and culture, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers, especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying.

    Not quite a summary, but what I'm trying to ask:

      • Is it common or recommended for companies to allocate PD time?
      • Whose responsibility is it to ensure a developer's knowledge and skills are up to date?
      • Should a part-time work schedule imply a lower ratio of PD time to work?
      • How can a developer show non-developer coworkers that reading blogs and books is net productive?
      • Is reading blogs and books actually net productive? (references welcomed)
      • Is writing blogs effective as a way of PD? (a recent theme on Hacker News)

    This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.


  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are now getting some great proof points from customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimony:

      • San Diego Unified School District harnesses attendance, procurement, and operational data with Oracle Exalytics, generating $4.4 million in savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in operational efficiencies, and $1 million in hardware cost avoidance.

      • NilsonGroup chooses the Oracle Exalytics In-Memory Machine as its solution for accessing the critical data that keeps its stores competitive, with real-time Mobile BI: it took only "3 days to get up and running" with Exalytics. Video

      • Nykredit, in the Danish financial sector, describes its experiences from testing the Exalytics Business Intelligence Machine: "it was up and running within 4 days" with "more intuitive dashboards", "up to 70x better performance", and "cheaper maintenance and lower total cost of ownership". Video

      • Sodexo chose Oracle Exalytics as its business analytics platform, accelerating Essbase performance "more than 8x" for more than 2,000 Excel add-in users, "significantly changing how people in information management now deal with data". Video

      • Polk, Savvis, Nykredit, and Key Energy describe testing the Oracle Exalytics In-Memory Machine: to "reach more users than we ever have before", to "fly through the data without impeding the analytic process", to "drive our enterprise groups into this tool instead of having departmental solutions", and for the "advanced visualisation this product enables". Video


  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 to unknown, and an integer value Y ranging from 1 to 9. The data need to be presented in order 'X, then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as the index (sort order).

    If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y?

    [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:

      ID  Baseline  ParentID
      -----------------------
      1   0         0    (task)
      5   2         1    (baseline)
      8   1         1    (baseline)
      9   0         0    (task)
      12  0         0    (task)
      16  1         12   (baseline)

    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies a task. IDs are unlimited (the X variable).

    The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
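    Since Y is bounded by a single digit, one possible scheme is simply X*10 + Y: it preserves 'X, then Y' order and leaves X unlimited up to the integer width. A quick sketch in Python (illustrative only; it assumes a baseline row should take its parent task's ID as X, so that it sorts directly under its task):

        def task_index(row_id, baseline, parent_id):
            # For a task row (baseline 0), X is its own ID; for a baseline
            # row, X is the parent task's ID.
            x = row_id if baseline == 0 else parent_id
            return x * 10 + baseline

        rows = [(1, 0, 0), (5, 2, 1), (8, 1, 1), (9, 0, 0), (12, 0, 0), (16, 1, 12)]
        for rid, bl, pid in sorted(rows, key=lambda r: task_index(*r)):
            print(task_index(rid, bl, pid), rid, bl, pid)
        # Order: task 1, its baselines 1 and 2, task 9, task 12, its baseline 1.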


  • MVP Pattern Philosophical Question - Security Checking in UI

    - by Brian
    Hello, I have a philosophical question about the MVP pattern. I have a component that checks whether a user has access to a certain privilege. This privilege turns certain UI features on or off. For instance, suppose you have a UI grid, and for each row that gets bound, I do a security check to see if certain features in the grid should be enabled or disabled. There are two ways to do this: have the UI/view call the component's method, determine if it has access, and enable/disable or show/hide; or have the view fire an event to the presenter, have the presenter do the check, and return the access back down to the view through the model or through the event args. As per the MVP pattern, which component should security checks fit into, the presenter or the view? Since the view is using it to determine its accessibility, it seems more fitting in the view; but the component is doing database checks and the like internally, and there is business logic there, so I can see the reverse argument too. Thoughts? Thanks.
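    For illustration, a minimal sketch of the second option (shown in Python for brevity; all names are hypothetical): the presenter owns the security check and hands the view a plain flag, so the view stays free of business logic.

        class SecurityService:
            def has_privilege(self, user, privilege):
                # Stand-in for the real database-backed check.
                return privilege in {"grid.edit"}

        class GridView:
            def set_row_editable(self, row, enabled):
                print(f"row {row}: edit {'enabled' if enabled else 'disabled'}")

        class GridPresenter:
            def __init__(self, view, security, user):
                self.view, self.security, self.user = view, security, user

            def on_row_bound(self, row):
                # View raised an event; the presenter decides, the view renders.
                can_edit = self.security.has_privilege(self.user, "grid.edit")
                self.view.set_row_editable(row, can_edit)

        presenter = GridPresenter(GridView(), SecurityService(), user="alice")
        presenter.on_row_bound(row=1)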


  • How to blend multiple normal maps?

    - by János Turánszki
    I want to achieve a distortion effect that distorts the full screen. For that I spawn a couple of images with normal maps. I render their normal-map part on some camera-facing quads onto a render target which is cleared to the color (127,127,255,255). This color means that there is no distortion whatsoever. Then I want to render some images like this one onto it:

    If I draw one somewhere on the screen, it looks correct because it blends in seamlessly with the background (which is the same color that appears on the edges of this image). If I draw another one on top of it, it is no longer a seamless transition. For this I created a blend state in DirectX 11 that keeps the maximum of two colors, so it is now a seamless transition, but this way the colors lower than 127 (0.5f normalized) will not contribute.

    I am not making a simulation, and the effect looks quite convincing and nice for a game, but in my spare time I am thinking about how I could achieve a nicer or more correct effect with a blend state, maybe by averaging the colors somehow. If I did it with a shader, I would add the colors and then normalize them, but I need to combine an arbitrary number of images onto a render target. This is my blend state now, which blends them seamlessly but not correctly:

        D3D11_BLEND_DESC bd;
        bd.RenderTarget[0].BlendEnable = true;
        bd.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
        bd.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        bd.RenderTarget[0].BlendOp = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
        bd.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
        bd.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
        bd.RenderTarget[0].RenderTargetWriteMask = 0x0f;

    Is there any way of improving upon this? (P.S. I considered rendering each one with a separate shader incrementally on top of the others, but that would consume a lot of render targets, which is unacceptable.)
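    To make the "add then normalize" idea concrete, here is a small numpy sketch (purely illustrative, with made-up sample values): decode each overlapping sample to a signed vector, sum the deviations, renormalize, and re-encode, then compare with the component-wise max that the blend state above produces.

        import numpy as np

        def decode(rgb):
            # Map [0, 255] channel values to a signed normal in [-1, 1].
            return np.asarray(rgb, dtype=float) / 127.5 - 1.0

        def encode(n):
            # Renormalize, then map back to [0, 255].
            n = n / np.linalg.norm(n)
            return np.round((n + 1.0) * 127.5).astype(int)

        a = [180, 127, 230]  # two overlapping normal-map samples
        b = [90, 160, 240]   # (made-up values for the demo)

        print("sum + normalize:", encode(decode(a) + decode(b)))
        print("max blend:      ", np.maximum(a, b))
        # The first result lets channels below 127 pull the normal in the
        # opposite direction; the max blend simply discards them.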


  • Positioning a sprite in XNA: Use ClientBounds or BackBuffer?

    - by Martin Andersson
    I'm reading a book called "Learning XNA 4.0", written by Aaron Reed. Throughout most of the chapters, whenever he calculates the position of a sprite to use in his call to SpriteBatch.Draw, he uses Window.ClientBounds.Width and Window.ClientBounds.Height. But then all of a sudden, on page 108, he uses PresentationParameters.BackBufferWidth and PresentationParameters.BackBufferHeight instead.

    I think I understand what the back buffer and the client bounds are and the difference between the two (or perhaps not?). But I'm mighty confused about when I should use one or the other when it comes to positioning sprites. The author mostly uses the client bounds, both for checking whenever a moving sprite is off the screen and for finding a spawn point for new sprites. However, he seems to make two exceptions to this pattern in his book. The first is when he wants some animated sprites to "move in" and cross the screen from one side to the other (page 108, as mentioned). The second and last is when he positions a texture to work as a button in the lower right corner of a Windows Phone 7 screen (page 379). Anyone got an idea?

    I shall provide some context if it is of any help. Here's how he usually calls SpriteBatch.Draw (a code example where he positions a sprite in the middle of the screen, page 35):

        spriteBatch.Draw(texture,
            new Vector2(
                (Window.ClientBounds.Width / 2) - (texture.Width / 2),
                (Window.ClientBounds.Height / 2) - (texture.Height / 2)),
            null, Color.White, 0, Vector2.Zero, 1, SpriteEffects.None, 0);

    And here is the first case of four possible in a switch statement that will set the position of soon-to-be-spawned moving sprites; this position will later be used in the SpriteBatch.Draw call (page 108):

        // Randomly choose which side of the screen to place enemy,
        // then randomly create a position along that side of the screen
        // and randomly choose a speed for the enemy
        switch (((Game1)Game).rnd.Next(4))
        {
            case 0: // LEFT to RIGHT
                position = new Vector2(
                    -frameSize.X,
                    ((Game1)Game).rnd.Next(0,
                        Game.GraphicsDevice.PresentationParameters.BackBufferHeight
                        - frameSize.Y));
                speed = new Vector2(((Game1)Game).rnd.Next(
                    enemyMinSpeed, enemyMaxSpeed), 0);
                break;


  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: store custom data with the data attribute on any DOM element and retrieve it.

    Previously, under HTML4, we used to use classes to store custom data, something to the effect of:

        <input class="account void limit-5000 over-4999" />

    and then had to parse the data out of the class. In a book published by Peter-Paul Koch in 2007, ppk on JavaScript, he explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if a class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see, there are quite a few reasons why using classes is a bad design, and why it was important to define custom data attributes in HTML5.

    Syntax: you define a data attribute by simply prefixing any data item you want to store on any HTML element with "data-". For example, to store our customer's account data with a hidden input element:

        <input type="hidden" data-account="void" data-limit=5000 data-over=4999 />

    How to access the data:

        account  -  element.dataset.account
        limit    -  element.dataset.limit

    You can also access it using the more traditional get/setAttribute methods, or, if using jQuery:

        $('#element').attr('data-account', 'void')

    Browser support: all except IE. There is an IE hack around this at http://gist.github.com/362081.

    Special note: be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it. So this:

        data-myAccount="A1234"

    will not be found when you read it with:

        element.dataset.myAccount

    Use only lower-case when reading, so this will work:

        element.dataset.myaccount


  • SQL query. An unusual join. DB implemented in sqlite-3

    - by user02814
    This is essentially a question about constructing an SQL query. The DB is implemented with sqlite3, and I am a relatively new user of SQL. I have two tables and want to join them in an unusual way. The following is an example to explain the problem.

    Table 1 (t1):

      id   year  name
      -----------------------
      297  2010  Charles
      298  2011  David
      300  2010  Peter
      301  2011  Richard

    Table 2 (t2):

      id   year  food
      -----------------------
      296  2009  Bananas
      296  2011  Bananas
      297  2009  Melon
      297  2010  Coffee
      297  2012  Cheese
      298  2007  Sugar
      298  2008  Cereal
      298  2012  Chocolate
      299  2000  Peas
      300  2007  Barley
      300  2011  Beans
      300  2012  Chickpeas
      301  2010  Watermelon

    I want to join the tables on id and year. The catch is that id must match exactly, but if there is no exact match in Table 2 for the year in Table 1, then I want to choose the next (lower) available year. A selection of the kind that I want would produce the following result:

      id   year  matchyr  name     food
      ---------------------------------------
      297  2010  2010     Charles  Coffee
      298  2011  2008     David    Cereal
      300  2010  2007     Peter    Barley
      301  2011  2010     Richard  Watermelon

    To summarise: id=297 had an exact match for the year=2010 given in Table 1, so the corresponding line for id=297, year=2010 is chosen from Table 2. id=298, year=2011 did not have a matching year in Table 2, so the next available year (less than 2011) is chosen. As you can see, I would also like to know what that matched year (whether exact or inexact) actually was.

    I would very much appreciate (1) an indication (yes/no answer) of whether this is possible to do in SQL alone, or whether I need to look outside SQL, and (2) a solution, if that is not too onerous.
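    For what it's worth, this kind of "greatest year not exceeding" join is expressible in SQL alone via a correlated subquery. A hedged sketch (not from the original thread), runnable as-is against Python's sqlite3 module:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE t1 (id INT, year INT, name TEXT);
            CREATE TABLE t2 (id INT, year INT, food TEXT);
            INSERT INTO t1 VALUES (297,2010,'Charles'),(298,2011,'David'),
                                  (300,2010,'Peter'),(301,2011,'Richard');
            INSERT INTO t2 VALUES (296,2009,'Bananas'),(296,2011,'Bananas'),
                (297,2009,'Melon'),(297,2010,'Coffee'),(297,2012,'Cheese'),
                (298,2007,'Sugar'),(298,2008,'Cereal'),(298,2012,'Chocolate'),
                (299,2000,'Peas'),(300,2007,'Barley'),(300,2011,'Beans'),
                (300,2012,'Chickpeas'),(301,2010,'Watermelon');
        """)

        query = """
            SELECT t1.id, t1.year, t2.year AS matchyr, t1.name, t2.food
              FROM t1
              JOIN t2
                ON t2.id = t1.id
               AND t2.year = (SELECT MAX(x.year) FROM t2 AS x
                               WHERE x.id = t1.id AND x.year <= t1.year)
        """
        for row in con.execute(query):
            print(row)
        # (297, 2010, 2010, 'Charles', 'Coffee')
        # (298, 2011, 2008, 'David', 'Cereal')
        # (300, 2010, 2007, 'Peter', 'Barley')
        # (301, 2011, 2010, 'Richard', 'Watermelon')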


  • Why am I not getting an sRGB default framebuffer?

    - by Aaron Rotenberg
    I'm trying to make my OpenGL Haskell program gamma correct by making appropriate use of sRGB framebuffers and textures, but I'm running into issues making the default framebuffer sRGB. Consider the following Haskell program, compiled for 32-bit Windows using GHC and linked against 32-bit freeglut:

        import Foreign.Marshal.Alloc (alloca)
        import Foreign.Ptr (Ptr)
        import Foreign.Storable (Storable, peek)
        import Graphics.Rendering.OpenGL.Raw
        import qualified Graphics.UI.GLUT as GLUT
        import Graphics.UI.GLUT (($=))

        main :: IO ()
        main = do
            (_progName, _args) <- GLUT.getArgsAndInitialize
            GLUT.initialDisplayMode $= [GLUT.SRGBMode]
            _window <- GLUT.createWindow "sRGB Test"
            -- To prove that I actually have freeglut working correctly.
            -- This will fail at runtime under classic GLUT.
            GLUT.closeCallback $= Just (return ())
            glEnable gl_FRAMEBUFFER_SRGB
            colorEncoding <- allocaOut $ glGetFramebufferAttachmentParameteriv
                gl_FRAMEBUFFER gl_FRONT_LEFT
                gl_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING
            print colorEncoding

        allocaOut :: Storable a => (Ptr a -> IO b) -> IO a
        allocaOut f = alloca $ \ptr -> do
            f ptr
            peek ptr

    On my desktop (Windows 8 64-bit with a GeForce GTX 760 graphics card) this program outputs 9729, a.k.a. gl_LINEAR, indicating that the default framebuffer is using a linear color space, even though I explicitly requested an sRGB window. This is reflected in the rendering results of the actual program I'm trying to write - everything looks washed out because my linear color values aren't being converted to sRGB before being written to the framebuffer. On the other hand, on my laptop (Windows 7 64-bit with an Intel graphics chip), the program prints 0 (huh?) and I get an sRGB default framebuffer by default, whether I request one or not! And on both machines, if I manually create a non-default framebuffer bound to an sRGB texture, the program correctly prints 35904, a.k.a. gl_SRGB.

    Why am I getting different results on different hardware? Am I doing something wrong? How can I get an sRGB framebuffer consistently on all hardware and target OSes?


  • Displaying Exceptions Thrown or Caught in Managed Beans

    - by Frank Nimphius
    Just came across a sample written by Steve Muench which, somewhere deep in its implementation details, uses the following code to route exceptions to the ADF binding layer, to be handled by the ADF model error handler (which can be customized by overriding the DCErrorHandlerImpl class and configuring the custom class in the DataBindings.cpx file). To route an exception to the ADFm error handler, Steve used the following code:

        ((DCBindingContainer)BindingContext.getCurrent()
            .getCurrentBindingsEntry()).reportException(ex);

    The same code, however, can be used in managed beans as well to enforce consistent error handling in ADF. As an example, let's assume a managed bean method hits an exception. To simulate this, let's use the following code:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            throw new JboException("Just to tease you !!!!!");
        }

    The exception shows at runtime as displayed in the following image. Assuming a try-catch block is used to intercept the exception caused by a managed bean action, you can route the error message display to the ADF model error handler. Again, let's simulate the code that would need to go into a try-catch block:

        public void onToolBarButtonAction(ActionEvent actionEvent) {
            JboException ex = new JboException("Just to tease you !!!!!");
            BindingContext bctx = BindingContext.getCurrent();
            ((DCBindingContainer)bctx.getCurrentBindingsEntry()).reportException(ex);
        }

    The error now displays as shown in the image below. As you can see, the error is now handled by the ADFm error handler, which - as mentioned before - could be a custom error handler. Using the ADF model error handling for displaying exceptions thrown in managed beans requires the current ADF Faces page to have an associated PageDef file (which is the case if the page or view contains ADF-bound components). Note that to invoke methods exposed on the business service, it is recommended to always work through the binding layer (method binding) so that in case of an error the ADF model error handler is automatically used.


  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation. I have a Scene class, to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons.

    A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects.

    B) Have some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first.

    Option A is pretty solid, but the problem is with the lights. In which layer do I add a light? Does it work across layers? On all bottom layers? And I still need the z-coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z-coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z clamped on).

    Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.
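    A minimal sketch of option B (written in Python for brevity, though the engine in question is JavaScript; the class names are invented): a flat container whose draw order is decided by an explicit z value.

        class SceneObject:
            def __init__(self, name, z=0):
                self.name, self.z = name, z

            def draw(self):
                print(f"draw {self.name} (z={self.z})")

        class Scene:
            def __init__(self):
                self.objects = []

            def add(self, obj):
                self.objects.append(obj)

            def draw(self):
                # Lower z draws first; a stable sort keeps insertion
                # order for objects that share the same z.
                for obj in sorted(self.objects, key=lambda o: o.z):
                    obj.draw()

        scene = Scene()
        scene.add(SceneObject("background", z=0))
        scene.add(SceneObject("player", z=10))
        scene.add(SceneObject("light", z=5))
        scene.draw()  # background, light, player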


  • How do I draw a point sprite using OpenGL ES on Android?

    - by nbolton
    Edit: I'm using the GL enum, which is incorrect since it's not part of OpenGL ES (see my answer). I should have used GL10, GL11 or GL20 instead.

    Here are a few snippets of what I have so far:

        void create() {
            renderer = new ImmediateModeRenderer();
            tiles = Gdx.graphics.newTexture(
                Gdx.files.getFileHandle("res/tiles2.png", FileType.Internal),
                TextureFilter.MipMap, TextureFilter.Linear,
                TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
        }

        void render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glClearColor(0.6f, 0.7f, 0.9f, 1);
        }

        void renderSprite() {
            int handle = tiles.getTextureObjectHandle();
            Gdx.gl.glBindTexture(GL.GL_TEXTURE_2D, handle);
            Gdx.gl.glEnable(GL.GL_POINT_SPRITE);
            Gdx.gl11.glTexEnvi(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
            renderer.begin(GL.GL_POINTS);
            renderer.vertex(pos.x, pos.y, pos.z);
            renderer.end();
        }

    create() is called once when the program starts, and renderSprite() is called for each sprite (so pos is unique to each sprite), where the sprites are arranged in a sort-of 3D cube. Unfortunately, this just renders a few white dots; I suppose the texture isn't being bound, which is why I'm getting white dots. Also, when I draw my sprites on anything other than the 0 z-axis, they do not appear - I read that I need to increase my zfar and znear, but I have no idea how to do this using libgdx (perhaps it's because I'm using an ortho projection? What do I use instead?). I know that the texture is usable, since I was able to render it using a SpriteBatch, but I guess I'm not using it properly with OpenGL.


  • Choosing Technology To Include In Software Design

    How many of us have been forced to select one technology over another when designing a new system? What factors do we and should we consider? How can we ensure the correct business decision is made?

    When faced with this type of decision it is important to gather as much information as possible regarding each technology being considered, as well as the project itself. Additionally, I tend to delay my decision about the technology until it ultimately must be made, because as the project progresses, requirements and other factors can alter which technology is best for the project.

    Important factors to consider when making technology decisions:

      • Time to implement and maintain
      • Total cost of the technology (including implementation and maintenance)
      • Adaptability of the technology
      • Implementation team's skill sets
      • Complexity of the technology (including implementation and maintenance)
      • Forecasted Return On Investment (ROI)
      • Forecasted Profit On Investment (POI)

    Of these factors, ROI and POI weigh the heaviest, because they take the other factors into consideration when calculating profitability and return on investment.

    For a real-world example, let us consider developing a web-based lead management system for a new company. This system can be hosted either on a Microsoft Windows-based web server or on a Linux-based web server.

    Important factors for this example:

      Implementation team's skill sets
        • Member 1 - Skill set: Classic ASP, ASP.Net, and MS SQL Server. Experience: 10 years
        • Member 2 - Skill set: PHP, MySQL, Photoshop and MS SQL Server. Experience: 3 years
        • Member 3 - Skill set: C++, VB6, ASP.Net, and MS SQL Server. Experience: 12 years

      Total cost of technology (including implementation and maintenance)
        • Linux - Initial year: $5,000 (random value); additional years: $3,000 (random value)
        • Windows - Initial year: $10,000 (random value); additional years: $3,000 (random value)

      Complexity of technology
        • Linux - Large learning curve with user-driven documentation. Estimated learning cost: $30,000
        • Windows - Minimal, based on the team's skills, with Microsoft documentation. Estimated learning cost: $5,000

      ROI
        • Linux - Initial total cost: $35,000; additional cost: $3,000 per year
        • Windows - Initial total cost: $15,000; additional cost: $3,000 per year

    Based on these hypothetical numbers, it makes more sense to select the Windows-based web server, because the initial investment in the technology is much lower than for the Linux-based web server.
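    To make the comparison explicit, a small sketch (in Python, using the example's self-described random figures) projecting total cost over a few years:

        # Project n-year total cost from the example's made-up figures.
        def total_cost(initial, learning, yearly, years):
            return initial + learning + yearly * (years - 1)

        for years in (1, 3, 5):
            linux = total_cost(initial=5_000, learning=30_000, yearly=3_000, years=years)
            windows = total_cost(initial=10_000, learning=5_000, yearly=3_000, years=years)
            print(f"{years} year(s): Linux ${linux:,} vs Windows ${windows:,}")
        # 1 year(s): Linux $35,000 vs Windows $15,000
        # 3 year(s): Linux $41,000 vs Windows $21,000
        # 5 year(s): Linux $47,000 vs Windows $27,000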


  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies with an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund when you look at their Twitter accounts:

      • World_Wildlife : verified account with 200K followers
      • WWF : main account with 800K followers
      • wwf_uk : lower case, with an underscore between WWF and the country indicator
      • WWFCanada : upper case, with the country indicator attached to WWF
      • ...

    I am planning to build a website which will hopefully grow global, and I would like to avoid this sort of inconsistency. Also, I was comparing what Twitter and Facebook allow in their usernames and found out that they don't allow the same characters (for instance, the former doesn't allow . whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions:

      • Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)?
      • Is there any research out there that has proven whether some schemes are better than others in terms of readability and/or SEO?


  • Prioritize compiler functionality/tasks, when designing a new language

    - by Mahdi
    Well, this question may be hard to ask and I expect a couple of down votes; however, I'm really interested in your ideas and recommendations. :)

    I've already made a very simple compiler, with a few limited features. Now I'm getting more into it, to make it more like a real-world compiler. I definitely need to start over, because I have much more experience and many more ideas in this area than a few years ago. So I want to know, right now, from the very first step again: which tasks/features of the new compiler should I implement first, and which tasks have lower priority than others?

    For example, I'd say I'd first go and decide on the object-oriented structure for the new language, but you might say: hey, just go for a compiler that can define a variable; when you've finished that, then start thinking about OOP designs... I'd prefer to hear the pros and cons of your suggestions as well. Actually, I like to start from bottom to top, adding the simplest tasks first and the more complex ones later, but I'm totally open to any new ideas and really appreciate them. Also, please consider that I'm thinking about the design concepts. I expect answers like:

      Priority from highest to lowest:
        • variables, because ...
        • functions, because ...
        • loops, because ...
        • ...

    Not: define a syntax for your new language, and start parsing your source code ...


  • Problem Installing Ubuntu 12.04 - [Errno 5]

    - by Rob Barker
    I'm trying to install Ubuntu 12.04 on my brand-new eBay-purchased hard drive. I only got the drive today and already it's causing me problems. The seller is a proper professional company with 99.9% positive feedback, so it seems unlikely they would have sold me something rubbish. My old hard drive packed up last Tuesday, so I bought a new one to replace it. Because this was an entirely new drive, I decided to install Ubuntu, as there was no current operating system. My computer is an eMachines EM250 netbook. There's no disc drive, so I am installing from a USB stick.

    The new operating system loads beautifully, and the desktop appears just as it should. When I click install, I am taken to the installer, which copies the files to about 35% and then displays this:

        [Errno 5] Input/output error

        This is often due to a faulty CD/DVD disk or drive, or a faulty hard
        disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower
        speed, to clean the CD/DVD drive lens (cleaning kits are often
        available from electronics suppliers), to check whether the hard disk
        is old and in need of replacement, or to move the system to a cooler
        environment.

    The hard drive can be heard constantly crackling. When I booted Ubuntu 12.04 from my old faulty hard drive as a test, I didn't even make it past the purple Ubuntu screen, so it can't be that bad. Any ideas?

    Message to the moderators: please do not close this thread. I'm well aware there may be other threads on this, but I don't want it closed, as the others do not provide the answer I am looking for. Thank you.


  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the
        overhead and latency of compression and decompression, you should only
        gzip files above a certain size threshold; we recommend a minimum range
        between 150 and 1000 bytes. Gzipping files below 150 bytes can actually
        make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me:

        Following up on your question regarding the minimum size at which
        Akamai will compress the requested object when sending it to the end
        user: the minimum size is 860 bytes.

    My reply:

        What is the reason(s) why Akamai's minimum size is 860 bytes? And why,
        for example, is this not the case for files Akamai serves for Facebook?
        (see below) Google recommends gzipping more aggressively, and that
        seems appropriate on our site, where the most frequent hits, by far,
        are AJAX calls that are under 860 bytes.

    Akamai's response:

        The reason 860 bytes is the minimum size for compression is twofold:
        (1) the overhead of compressing an object under 860 bytes outweighs
        the performance gain, and (2) objects under 860 bytes can be
        transmitted via a single packet anyway, so there isn't a compelling
        reason to compress them.

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit - just to save on bandwidth costs, or is there a performance gain in doing so?
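    The overhead claim is easy to verify: gzip adds roughly 20+ bytes of header and trailer, so small or incompressible payloads can come out larger. A quick illustrative check (not from the original exchange):

        import gzip, os

        for size in (50, 150, 860, 4096):
            payload = os.urandom(size)  # worst case: incompressible bytes
            text = b"x" * size          # best case: highly repetitive bytes
            print(f"{size:5d} bytes -> random: {len(gzip.compress(payload)):5d}, "
                  f"repetitive: {len(gzip.compress(text)):5d}")
        # Incompressible payloads always come out larger than the original;
        # only compressible data wins, which is one reason a minimum-size
        # threshold makes sense.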


  • Oracle University Partner Enablement Update (15th November)

    - by swalker
    Two new OPN Only Boot Camps available

    The following OPN Only Boot Camps have just become available:

      • 3-day Oracle Exadata 11g Technical Boot Camp: prepares you to become an Oracle Exadata 11g Certified Implementation Specialist. Currently scheduled in Germany and the UK; available for scheduling in all countries. Live Virtual Class dates: 15-17 Feb 12 and 16-18 May 12.
      • 5-day Oracle BI Enterprise Edition 11g Implementation Boot Camp: currently scheduled in Sweden; available for scheduling in all countries.

    View the complete OPN Only Boot Camp schedule.

    Certification News: Java SE 7

    Be one of the first to get Java SE 7 certified. The following exams have recently become available for beta testing:

      • 1Z1-805 Upgrade to Java SE 7 Programmer (beta until 17-Dec-11) - Oracle Certified Professional, Java SE 7 Programmer
      • 1Z1-803 Java SE 7 Programmer I (beta until 17-Dec-11) - Oracle Certified Associate, Java SE 7 Programmer

    A beta exam offers you two distinct advantages: you will be one of the first to get certified, and you pay a lower price. Beta exams can be taken at any Pearson VUE Testing Center.

    New Courses

    Oracle University released several new courses recently. Please click here to find out more about the new course titles.

    Are you looking for insight from the Oracle University experts? Check out these Oracle University newsletters: Technology Newsletters, Applications Newsletters.

    Stay connected to Oracle University: LinkedIn, OracleMix, Twitter, Facebook.


  • Thick models Vs. Business Logic, Where do you draw the distinction?

    - by TokenMacGuy
    Today I got into a heated debate with another developer at my organization about where and how to add methods to database-mapped classes. We use SQLAlchemy, and a major part of the existing code base in our database models is little more than a bag of mapped properties with a class name, a nearly mechanical translation from database tables to Python objects.

    In the argument, my position was that the primary value of using an ORM is that you can attach low-level behaviors and algorithms to the mapped classes. Models are classes first, and secondarily persistent (they could be persisted using XML in a filesystem; you don't need to care). His view was that any behavior at all is "business logic", and necessarily belongs anywhere but in the persistent model, which is to be used for database persistence only.

    I certainly do think that there is a distinction between business logic, which should be separated since it has some isolation from the lower level of how it gets implemented, and domain logic, which I believe is the abstraction provided by the model classes argued about in the previous paragraph; but I'm having a hard time putting my finger on what that is. I have a better sense of what might be the API (which, in our case, is HTTP "ReSTful"), in that users invoke the API with what they want to do, distinct from what they are allowed to do and how it gets done.

    tl;dr: What kinds of things can or should go in a method on a mapped class when using an ORM, and what should be left out, to live in another layer of abstraction?
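    For concreteness, a hedged SQLAlchemy sketch of the distinction under debate (written against 1.4-style declarative mapping; the model and all names are invented for illustration): a low-level, persistence-agnostic domain behavior lives on the mapped class, while a business rule stays outside it.

        from decimal import Decimal

        from sqlalchemy import Column, Integer, Numeric, String
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class LineItem(Base):
            __tablename__ = "line_items"
            id = Column(Integer, primary_key=True)
            sku = Column(String(32))
            unit_price = Column(Numeric(10, 2))
            quantity = Column(Integer)

            # Domain logic: intrinsic to the object, knows nothing about
            # persistence, HTTP, or company policy.
            @property
            def total(self):
                return self.unit_price * self.quantity

        # Business logic: a policy decision that arguably does not belong
        # on the mapped class itself.
        def apply_bulk_discount(item, threshold=100, rate=Decimal("0.10")):
            if item.quantity >= threshold:
                return item.total * (1 - rate)
            return item.total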


  • Code contracts and inheritance

    - by DigiMortal
    In my last posting about code contracts I showed you how to enforce code contracts on classes through interfaces. In this posting I will go a step further and show you how code contracts work in the case of inherited classes. As a first thing, let's take a look at my interface and its code contracts.

        [ContractClass(typeof(ProductContracts))]
        public interface IProduct
        {
            int Id { get; set; }
            string Name { get; set; }
            decimal Weight { get; set; }
            decimal Price { get; set; }
        }

        [ContractClassFor(typeof(IProduct))]
        internal sealed class ProductContracts : IProduct
        {
            private ProductContracts() { }

            int IProduct.Id
            {
                get { return default(int); }
                set { Contract.Requires(value > 0); }
            }

            string IProduct.Name
            {
                get { return default(string); }
                set
                {
                    Contract.Requires(!string.IsNullOrWhiteSpace(value));
                    Contract.Requires(value.Length <= 25);
                }
            }

            decimal IProduct.Weight
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 3);
                    Contract.Requires(value < 100);
                }
            }

            decimal IProduct.Price
            {
                get { return default(decimal); }
                set
                {
                    Contract.Requires(value > 0);
                    Contract.Requires(value < 100);
                }
            }
        }

    And here is the Product class that implements IProduct:

        public class Product : IProduct
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual decimal Weight { get; set; }
            public decimal Price { get; set; }
        }

    If we run the following code and violate the code contract set on Id, we will get a ContractException:

        public class Program
        {
            static void Main(string[] args)
            {
                var product = new Product();
                product.Id = -100;
            }
        }

    Now let's make Product an abstract class and define a new class called Food that adds one more contract to the Weight property:

        public class Food : Product
        {
            public override decimal Weight
            {
                get { return base.Weight; }
                set
                {
                    Contract.Requires(value > 1);
                    Contract.Requires(value < 10);

                    base.Weight = value;
                }
            }
        }

    Now we should have the following rules in place for Food:

      1. weight must be greater than 1,
      2. weight must be greater than 3,
      3. weight must be less than 100,
      4. weight must be less than 10.

    The interesting part is what happens when we try to violate the lower and upper limits of Food's weight. To see what happens, let's try to violate rules #2 and #4. Just comment one of the last lines out in the following method to test the other assignment:

        public class Program
        {
            static void Main(string[] args)
            {
                var food = new Food();
                food.Weight = 12;
                food.Weight = 2;
            }
        }

    And here are the results as screenshots (click on the images to see them at original size): violation of the lower limit, and violation of the upper limit. As you can see, for both violations we get a ContractException, as expected.

    Code contracts inheritance is a powerful and at the same time dangerous feature. Although you can always narrow down the conditions that come from more general classes, it is possible to define impossible or conflicting contracts at different points in the inheritance hierarchy.


  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)
    Parameter location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated, the engine goes up the tree using max_fore_level, not top_level - 1. Max_fore_level has to be less than or equal to top_level - 1. Due to this requirement, combinations falling under the same top level - 1 member must be in the same task. A member of the top level - 1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.

    - Reveal current task size:
      Go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3, since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast tree design.

    - Control of tasks:
      The number of tasks created is the lowest of the number of branches (as defined by top level - 1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You are used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.

    - Discovery of current branch size:
      Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised, or at times completely removed from the forecast tree.

    - Control of forecast tree branch size:

      - Run the following SQL to determine how evenly the branches are being split by the engine:

            select count(*), branch_id from mdp_matrix
             where prediction_status = 1 and do_fore = 1
             group by branch_id;

        This will give you an understanding of whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.

      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator):

            select count(*), level_id from mdp_matrix
             where prediction_status = 1 and do_fore = 1
             group by level_id;

        This will give us an understanding of the level of the forecast tree at which the forecast is being generated. Having a majority of combinations high up the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on the results, we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast with more combinations at a lower level.

        For example:
          - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches.
          - If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised, or at times completely removed from the forecast tree.
          - For example, suppose the highest level of the forecast tree is set to Brand/All Locations, you have 10 brands, and 2 of the brands account for 67% and 29% of all combinations. There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. A possible solution could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
          - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
          - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.

    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for TargetTaskSize depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: Development strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).

    - How do I control the number of engines?
      Determine how many CPUs are on the machine(s) running the engine. As mentioned earlier, the general rule is to designate 2 engines per available CPU. So, for example, if you are running the engine on a machine that has 4 CPUs, then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

    - Where do I set the number of engines?
      To add multiple computers where the engine will run, first back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:

            <Entry>
              <Key argument="ComputerNames" />
              <Value type="string" argument="localhost,localhost,localhost" />
            </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of TargetTaskSize is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations.

    - Level 1 combinations are known as the group size. The engine manager will use this parameter to attempt to create branches of similar size. (The engine manager will not create engines that do not have a branch.) The engine divider algorithm uses the value of TargetTaskSize as the system-preferred branch size, to create branches that are more equal in size, which improves engine performance.

    The engine divider will try to add as many branches as possible to an existing task, up to the limit of TargetTaskSize level 1 combinations, before creating new tasks.

    Coming up next:
      - The engine divider
      - Group size
      - Level 1 combinations
      - MAX_FORE_LEVEL
      - Engine parameters


  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database.

    Access our Resource Kit to discover the top reasons why:

      • 3,000 ISVs and OEMs rely on MySQL as their embedded database
      • 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings
      • Leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster

    Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

