Search Results

Search found 7216 results on 289 pages for 'low cost'.


  • 2010 is gone and Welcome 2011

    - by anirudha
    Over the last few days I spent my week at Firozabad. The town is quite small and close to Agra, so I did not pass up the chance to see the Taj Mahal and the Red Fort; it was my first opportunity to see them. I planned to go to Agra last Saturday. First I went to the Red Fort, where I talked with many foreigners. They enjoyed talking with me, because the only person with them is their guide, who is like a book: he cannot really converse with you, he just tells you about everything at the location because you have paid him. Visitors come from many countries, such as Germany, Japan, Russia and Italy, and there was no problem talking with them; perhaps they were happy to talk with me. When I had finished seeing the Red Fort I noticed a girl who looked like a foreigner. I asked where she and her friends came from, and they told me France. As I was about to move on, I thought of asking them to be my friends. I had never asked any girl for friendship, even in school or college, but I asked them, and they accepted. I put my email ID in their hands before they left, but I still have not received their mail. Next I went to the Taj Mahal. The Taj experience was not so good: I spent three or four hours in the crowd. I found there is little real security even though many soldiers are posted there; they all work far too slowly, spending ten minutes on a single security check. Their hands move as slowly as a low-configuration computer. I talked with many people there as well. I spoke to a man who introduced himself as Jacob, from Chicago; he spoke very fast and I could not follow what he said. I had another problem with some Chinese visitors: when I tried to talk with them, I found they spoke only Chinese. Wish you a very, very happy new year.

    Read the article

  • What does the `dmesg` error: "composite sync not supported" mean?

    - by M. Tibbits
    Question: I see [ 20.473125] composite sync not supported and several such entries when I run dmesg. What do they mean? Background: I'm trying to debug a problem where my laptop won't suspend. Since acpi seems happy and I can suspend easily from the command line, I've turned to tracking down all boot-up errors/warnings. So I run dmesg | grep not and, amongst other shtuff, I get: 728:[ 17.267120] composite sync not supported 733:[ 18.009061] composite sync not supported 740:[ 18.159289] registered panic notifier 749:[ 18.162500] vga16fb: not registering due to another framebuffer present 757:[ 18.598251] composite sync not supported 776:[ 20.473125] composite sync not supported 777:[ 20.932266] composite sync not supported 778:[ 28.350231] composite sync not supported 779:[ 28.924913] composite sync not supported 780:[ 35.480658] composite sync not supported And the full log for the few lines right around that first appearance (line 728) is listed at the bottom of my post (I'd happily include anything else). Any ideas what could be causing this? I've read several sites: Ubuntuforums #1 IRC Chat #1 One post talks about ??Adobe flash?? causing this error? Some others also suggest that it might be an nvidia related problem, but I've got a Dell Latitude D630 with an integrated Intel graphics -- so nvidia isn't the problem. [ 17.207142] phy0: Selected rate control algorithm 'minstrel' [ 17.207833] Registered led device: b43-phy0::tx [ 17.207849] Registered led device: b43-phy0::rx [ 17.207865] Registered led device: b43-phy0::radio [ 17.207927] Broadcom 43xx driver loaded [ Features: PL, Firmware-ID: FW13 ] [ 17.267120] composite sync not supported [ 17.415795] EXT4-fs (sda2): mounted filesystem with ordered data mode [ 17.602131] [drm] initialized overlay support [ 17.620201] input: DualPoint Stick as /devices/platform/i8042/serio1/input/input7 [ 17.641192] input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/platform/i8042/serio1/input/input8 [ 18.009061] composite sync not supported [ 18.106042] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af: clean. [ 18.108115] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff: clean. [ 18.108941] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff: clean. [ 18.109676] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7: clean. [ 18.110356] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean. 
[ 18.159286] fb0: inteldrmfb frame buffer device [ 18.159289] registered panic notifier [ 18.160218] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:01/input/input9 [ 18.160286] ACPI: Video Device [VID1] (multi-head: yes rom: no post: no) [ 18.160334] ACPI Warning for \_SB_.PCI0.VID2._DOD: Return Package has no elements (empty) (20090903/nspredef-433) [ 18.160432] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:02/input/input10 [ 18.160491] ACPI: Video Device [VID2] (multi-head: yes rom: no post: no) [ 18.160539] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 18.162494] vga16fb: initializing [ 18.162497] vga16fb: mapped to 0xc00a0000 [ 18.162500] vga16fb: not registering due to another framebuffer present [ 18.176091] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 18.176123] HDA Intel 0000:00:1b.0: setting latency timer to 64 [ 18.285752] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input11 [ 18.312497] input: HDA Intel Mic at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 18.312586] input: HDA Intel HP Out at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13 [ 18.328043] usbcore: registered new interface driver ndiswrapper [ 18.460909] Console: switching to colour frame buffer device 180x56 [ 18.598251] composite sync not supported

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age - Rate. There are 7 age brackets so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy) the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach, that even in the best case ultimately has low benefits. As a professional programmer does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
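
    The flattened-CSV idea above is easy to picture in code. As a rough illustration only (not code from the question, and with made-up class and method names), the sketch below unflattens a value like "65,69,0.05,70,74,0.06" into From|To|Rate rows and answers an age lookup; whether this or a plain hard-coded table is the right call is exactly the cost/benefit question being asked.

        import java.util.ArrayList;
        import java.util.List;

        // Minimal sketch of the flattened-CSV lookup described in the question.
        // The string "65,69,0.05,70,74,0.06" is read as repeating (from, to, rate) triples.
        public class AgeRateLookup {

            // One row of the From|To|Rate table.
            record Bracket(int from, int to, double rate) {}

            private final List<Bracket> brackets = new ArrayList<>();

            // Unflatten the stored CSV value into the lookup table.
            public AgeRateLookup(String csv) {
                String[] parts = csv.split(",");
                for (int i = 0; i + 2 < parts.length; i += 3) {
                    brackets.add(new Bracket(
                            Integer.parseInt(parts[i].trim()),
                            Integer.parseInt(parts[i + 1].trim()),
                            Double.parseDouble(parts[i + 2].trim())));
                }
            }

            // Return the rate for an age, or fail loudly if no bracket covers it.
            public double rateFor(int age) {
                for (Bracket b : brackets) {
                    if (age >= b.from() && age <= b.to()) {
                        return b.rate();
                    }
                }
                throw new IllegalArgumentException("No rate bracket covers age " + age);
            }

            public static void main(String[] args) {
                AgeRateLookup lookup = new AgeRateLookup("65,69,0.05,70,74,0.06");
                System.out.println(lookup.rateFor(67)); // 0.05
                System.out.println(lookup.rateFor(72)); // 0.06
            }
        }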

    Read the article

  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice at using Ubuntu when it comes to the more complex issues, so I thought it would be best to finally come here and ask for help before being redirected or closed out. I have already looked high and low on this board for a solution, but nothing came up for my particular case, so I might as well ask for the first time here. This is what I have at the moment: -Dell Inspiron 1764 w/ 64-bit Intel i5 Core -Dual boot: Windows 7 / Ubuntu 12.04.1 32-bit (from a 12.04 install) -Unity shell -Linux kernel: 3.2.0-32-generic-pae ...and this is my fglrxinfo: OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: ATI Mobility Radeon HD 5000 Series OpenGL version string: 4.2.11627 Compatibility Profile Context The one issue I have with Ubuntu is brightness. With the driver installed, every time I use the slider in Brightness and Lock settings or use the keyboard function, the screen freezes, goes black and comes up with a page of scrambled colors like this in the video. So I have looked all over this board and the web for a solution. This is what I have done so far to try to fix it: -First solution: Looking around, I found this small fix using the terminal: sudo gedit /etc/rc.local followed by adding this into "rc.local": echo # > /sys/class/backlight/acpi_video0/brightness This rarely works with the graphics driver still installed; I sometimes get lucky on a restart, but a reboot just snaps the brightness back to maximum. -Second solution: Simply remove the graphics driver while leaving the first fix in place. This solves the issue, but the monitor flickers and flashes at startup, which in itself is not a problem for me but may not be good for monitor health. It also causes the fan to speed up throughout the session and renders any program that needs the driver useless. -Third solution: The most obvious one: just use the brightness control in the AMD Catalyst Control Center software that came with the driver, and I can say its idea of brightness is HORRIBLE compared to the actual settings. Which brings me to where I am now: back to the driver to stop the fan speed-up, and it seems the only workaround for the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, and even that snaps back to maximum brightness if I restart. The fan speed problem is dealt with, but I now run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad and it seems the issue has been around since June 2012. Any help, redirect link or reference would be greatly appreciated. Thank you.

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company to which the development work has been offshored. The onshore team handles support and talks directly to the clients; we never talk to the clients ourselves, only to the onshore people who do. When requirements come in, the onshore team talks to the clients, writes requirement documents and passes them to us. We produce design documents after studying the requirements (we follow a traditional waterfall model). But there is one problem in the whole process: nobody, offshore or onshore, understands the functionality of the application completely. We just know it is a big, complex web app handling order processing, catalog management, campaign management and other activities. We struggle with the design document because the requirements are not clear, and it turns into a long series of questions and answers back and forth between the onshore team, the offshore team and the clients. We are often told to work out the functionality from the code, but that is usually not feasible: the code base is huge, and understanding even a simple menu item can take days if not weeks. We have asked the clients for a knowledge transfer about the application, to no avail. Our manager often tells us to start coding even though the design document is incomplete or the requirements unclear. We start by coding the part of the requirement that seems clear and wait for the rest, which usually delays deployment by a month. In extreme cases we have very few defects in development and production, yet the clients say it is not what they asked for. That starts a blame game and a series of change requests, and we end up building something very different. My question is: how would you do development work if you don't fully know the functionality of the app? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is simply how things began. I have tried to tell people about the advantages of agile, to no avail, and I don't think my team has the mindset needed to work in an agile environment.

    Read the article

  • Rights and use of developed software

    - by Nils Munch
    I have been working on a piece of software for a company that they wish to resell. There was an email-based agreement on a flat hourly rate for my work, and, eager as I was, I accepted a rather low fee. Due to the stress and tempo of the task, a proper contract was never drawn up or signed. The software was developed locally on my machine, and I was pretty much alone with it, apart from the excellent help I got from Stack Overflow when I was stuck. Now, with the software nearing completion, I suddenly heard that they had hired a new developer to build the same piece of software, and that I was expected to resign before long. Confused, I asked around and realized that the CEO had told the rest of the company that I was terminally ill with cancer and was expected to leave soon. Since I'm perfectly healthy, this confused me even more, until I realized what was going on. When I confronted my boss about it, I was no longer treated as a member of the company, and I left the same day, never to return. Later, I raised the question of my missing pay, since I had worked for quite a while and had not received any payment for my software. I saw that they had already sold a fair number of copies, and since it is not exactly sold cheap, the company should have plenty of money to pay me. The company refused, saying that they owned the software and everything it contained. That was a lot of drama, but my question is this: who has the rights to the software? The source code had my personal watermarks and copyright notices imprinted in it, but they have since simply deleted them. The company claims they have all the rights because they have built a website about the product, where they write "All rights reserved" at the bottom. My instinct tells me that if a company buys a service like this and then refuses to pay the developer, they should not be allowed to keep the product, much less resell it. I have not signed any agreement giving the company the rights to use this product; I made it in my own time and without help from the rest of the company. This all takes place in Denmark, in Europe, but I would guess the rules about this are somewhat universal. I'm not the strongest person when it comes to legal matters, so I might be wrong.

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI). Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: It handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances. Drivers Elastic compute and storage resources Cost avoidance Solution Here’s a sketch of a solution using our Windows Azure HPC SDK: Ingredients Web Role – this hosts a HPC scheduler web portal to allow web based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well. Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes. Database – stores state information about the job queue and resource configuration for the solution. Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed. Training Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure HPC Scheduler (3 labs)  The Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.
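
    To make the "embarrassingly parallel" idea concrete, here is a minimal, platform-neutral sketch of a Monte Carlo estimate of pi split across independent workers whose partial counts are simply summed at the end. It deliberately uses plain Java threads rather than the Windows Azure HPC Scheduler API; the point is only the shape of the workload a head node would farm out to compute nodes, and all names in it are illustrative.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Sketch of an embarrassingly parallel Monte Carlo job: each "worker" runs an
        // independent chunk of samples and the results are simply summed at the end.
        // A head node scheduling work across compute nodes follows the same pattern,
        // just across machines instead of threads.
        public class MonteCarloPi {

            // One independent work item: count hits inside the unit quarter circle.
            static long countHits(long samples, long seed) {
                java.util.Random rnd = new java.util.Random(seed);
                long hits = 0;
                for (long i = 0; i < samples; i++) {
                    double x = rnd.nextDouble();
                    double y = rnd.nextDouble();
                    if (x * x + y * y <= 1.0) {
                        hits++;
                    }
                }
                return hits;
            }

            public static void main(String[] args) throws Exception {
                final int workers = 8;
                final long samplesPerWorker = 2_000_000;

                ExecutorService pool = Executors.newFixedThreadPool(workers);
                List<Future<Long>> partials = new ArrayList<>();
                for (int w = 0; w < workers; w++) {
                    final long seed = 42 + w;
                    partials.add(pool.submit(() -> countHits(samplesPerWorker, seed)));
                }

                long totalHits = 0;
                for (Future<Long> f : partials) {
                    totalHits += f.get(); // aggregate the partial results
                }
                pool.shutdown();

                double pi = 4.0 * totalHits / (workers * samplesPerWorker);
                System.out.println("Estimated pi = " + pi);
            }
        }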

    Read the article

  • ADF Code Guidelines

    - by Chris Muir
    During Oracle Open World 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square.  While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW however is customers are bombarded with so many messages that it's easy to miss something important. So in this blog post I'd like to highlight as part of the ADF Architecture Square website, one of the initial core offerings is a new document entitled ADF Code Guidelines. Now the title of this document should hopefully make it obvious what the document contains, but what's the purpose of the document, why did Oracle create it? Personally having worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems, that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations.  These documents designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from their team's own expert learnings, and the huge amount of ADF technical collateral that is publicly available.  Maybe from manuals and whitepapers, presentations and blog posts, some written by Oracle and some written by independent sources. Now this is all good and well for the teams that have gone through this effort, gathering all the information and putting it into structured documents, kudos to them.  But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority.  So in recognising this hurdle, at Oracle we've devised the ADF Code Guidelines.  This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater).  The guidelines are summarized from a number of Oracle documents and other 3rd party collateral, with the goal of giving developers and development teams a short circuit on producing their own best practices collateral. The document is not a final production, but a living document that will be extended to cover new information as discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future. Image courtesy of paytai / FreeDigitalPhotos.net

    Read the article

  • Does Your Customer Engagement Create an Ah Feeling?

    - by Richard Lefebvre
    An (Oracle CX Blog) article by Christina McKeon Companies that successfully engage customers all have one thing in common. They make it seem easy for the customer to get what they need. No one would argue that brands don’t want to leave customers with this “ah” feeling. Since 94% of customers who have a low-effort service experience will buy from that company again, it makes financial sense for brands.1 Some brands are thinking differently about how they engage their customers to create ah feelings. How do they do it? Toyota is a great example of using smart assistance technology to understand customer intent and answer questions before customers hit the submit button online. What is unique in this situation is that Toyota captures intent while customers are filling out email forms. Toyota analyzes the data in the form and suggests responses before the customer sends the email. The customer gets the right answer, and the email never makes it to your contact center — which makes you and the customer happy. Most brands are fully aware of chat as a service channel, but some brands take chat to a whole new level. Beauty.com, part of the drugstore.com and Walgreens family of brands, uses live chat to replicate the personal experience that one would find at high-end department store cosmetic counters. Trained beauty advisors, all with esthetician or beauty counter experience, engage in live chat sessions with online shoppers to share immediate advice on the best products for their personal needs. Agents can watch customer activity online and determine the right time to reach out and offer help, just as help would be offered in a brick-and-mortar store. And, agents can co-browse along with the customer helping customers with online check-out. These personal chat discussions also give Beauty.com the opportunity to present products, advertise promotions, and resolve customer issues when they arise. Beauty.com converts approximately 25% of chat sessions into product orders. Photobox, the European market leader in online photo services, wanted to deliver personal and responsive service to its 24 million members. It ensures customer inquiries on personalized photo products are routed based on agent knowledge so customers get what they need from the company experts. By using a queuing system to ensure that the agent with the most appropriate knowledge handles the query, agent productivity increased while response times to 1,500 customer queries per day decreased. A real-time dashboard prevents agents from being overloaded with queries. This approach has produced financial results with a 15% increase in sales to existing customers and a 45% increase in orders from newly referred customers.

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself. Can be used interactively - there is some environment where programmers can enter commands one by one. Requires no more than one file - neither project files nor makefiles are required for running in batch mode. Can easily split code across multiple files - files can reference each other, or there is some support for modules. Has good support for data structures - supports structures like arrays, lists, and especially classes. Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries. Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature          C#      Python   shell scripting
    ---------------  ------  -------  ---------------
    Interactive      poor    strong   strong
    One file         poor    strong   strong
    Multiple files   strong  strong   moderate
    Data structures  strong  strong   poor
    Features         strong  strong   strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates. Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax. Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures. Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features. Note: some of these properties are more correctly described as belonging to a compiler or IDE than to the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Interview with Koen Aben, Supply Chain Director of WE Fashion

    - by user801960
    We recently spoke to Koen Aben, the Supply Chain Director of WE Fashion, who gave us some insight into how Oracle supported the international fashion retailer through the completion of a large scale integration project across its 340 European stores. Koen explains the reasoning behind the project which was to create a common retail foundation and to integrate and align working processes to drive insight and enable continued growth. It is always good to hear from someone of Koen’s experience who can articulate the benefits of partnering with the right company for such an extensive project as this. Koen explains that a crucial element of such a project is to unify business applications into a common platform, adding that for successful growth, retailers really need to achieve enterprise-wide alignment. At the start of the three year project, WE Fashion’s application platform was fragmented impacting the company’s ability to support sustained growth. In light of this, WE Fashion invested in its processes, systems, teams and partnerships to build the needed retail foundation. Now after successfully completing the project, the basis is in place to ensure that growth is unimpeded. In the video, Koen Aben highlights some of the factors necessary for the success of the project as: Having an understanding that the process of creating a growth platform for a company is a long journey Accepting that during a lengthy project such as this, there will be high and low points experienced within the project team and the business, but that the relationship with your partners is crucial to the success of the project. Having the correct team in place will prove to be the “lynch –pin” of any successful project Oracle supported Koen and his team in implementing this project, and is recognised for the role it played during this development in partnership with the company. On his experience with working with the Oracle team, Koen points out that in the critical situations, Oracle was there to ensure that the right people were in place whenever needed and this was key to ensuring the project’s success. Since Oracle is one of the few providers that can offer an enterprise-wide retail platform, our best practice approach is key to connecting interactions throughout the business to enable insight and optimise operations. This is a great example of a large scale international retail project, where the true success of its completion is reflected in how proud the company is about what has been achieved, and the fact that results are already being seen.

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very scientific, but I think it is a common problem, and seasoned developers and programmers will hopefully have some advice for avoiding the problem I mention in the title. By the way, what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all costs. By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and to what it represents, some data whose values are partly displayed in the UI, and some external triggers that I receive and handle (represented by callbacks from sensors). I made state diagrams to better map the scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but a few brute-force tests in the UI were enough to show that there were still gaps in the behavior, so I patched them. However, since each component behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events and state changes, etc. I have several components and each behaves like this: trigger received on input - trigger and its sender analyzed - output something (a message, a state change) based on the analysis. The problem is that this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or the desirable scenario (the phone crashes, the battery runs out, the phone turns off suddenly). This will introduce an invalid situation into the system, from which the system potentially CANNOT recover. I see this (although people do not realize it is the problem) in many of my competitors' apps on the App Store; customers write things like "I added three documents, and after going here and there, I cannot open them, even though I can see them" or "I recorded videos every day, but after recording a video that was too long, I cannot turn captions off on them, and the captions button doesn't work." These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has had an FSM breakdown. So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself? EDIT: I am talking in the context of a single view controller's view on the phone, i.e. one part of the application. I understand the MVC pattern and I have separate modules for distinct functionality; everything I describe is relevant to one canvas in the UI.
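
    One common defensive pattern for this kind of problem (offered as a generic sketch, not as the poster's design, and with hypothetical state and event names) is to funnel every state change through a single guarded transition table and to treat any event that is not legal in the current state, including the "phone died mid-operation" cases, as a request to fall back to a known-safe recovery state rather than mutating anything:

        import java.util.EnumMap;
        import java.util.EnumSet;
        import java.util.Map;
        import java.util.Set;

        // Minimal sketch: every state change goes through one guarded transition table.
        // An event that is not legal in the current state does not mutate anything;
        // instead the machine falls back to a known-safe state, which is also the
        // state used when restoring after a crash or a corrupted save.
        public class SessionStateMachine {

            enum State { IDLE, RECORDING, PLAYBACK, ERROR_RECOVERY }
            enum Event { START_RECORDING, STOP, PLAY, SENSOR_CALLBACK, APP_RELAUNCHED }

            private static final Map<State, Set<Event>> ALLOWED = new EnumMap<>(State.class);
            static {
                ALLOWED.put(State.IDLE, EnumSet.of(Event.START_RECORDING, Event.PLAY, Event.APP_RELAUNCHED));
                ALLOWED.put(State.RECORDING, EnumSet.of(Event.STOP, Event.SENSOR_CALLBACK));
                ALLOWED.put(State.PLAYBACK, EnumSet.of(Event.STOP));
                ALLOWED.put(State.ERROR_RECOVERY, EnumSet.of(Event.APP_RELAUNCHED));
            }

            private State current = State.IDLE;

            public State handle(Event event) {
                if (!ALLOWED.getOrDefault(current, EnumSet.noneOf(Event.class)).contains(event)) {
                    // Invalid input (crash restore, dead battery, unexpected callback):
                    // never apply it; park in a recovery state the UI knows how to show.
                    current = State.ERROR_RECOVERY;
                    return current;
                }
                current = apply(event);
                return current;
            }

            private State apply(Event event) {
                switch (event) {
                    case START_RECORDING: return State.RECORDING;
                    case PLAY:            return State.PLAYBACK;
                    case STOP:            return State.IDLE;
                    case APP_RELAUNCHED:  return State.IDLE;   // verified-clean restart
                    default:              return current;      // e.g. SENSOR_CALLBACK keeps the state
                }
            }

            public static void main(String[] args) {
                SessionStateMachine m = new SessionStateMachine();
                System.out.println(m.handle(Event.START_RECORDING)); // RECORDING
                System.out.println(m.handle(Event.PLAY));            // ERROR_RECOVERY (not legal while recording)
                System.out.println(m.handle(Event.APP_RELAUNCHED));  // IDLE again
            }
        }

    On startup the same idea applies: validate whatever was persisted, and if it does not correspond to a reachable state, load the safe state instead of trusting the saved one.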

    Read the article

  • Producing a smooth mesh from density cloud and marching cubes

    - by Wardy
    Based on my results from this question I decided to build myself a 3D noise map containing float values in place of my existing boolean point values. The effect I'm trying to produce is something like this, rather than typical rolling hills, which should explain the "missing cubes" in the image below. If I render my density map in normal "Minecraft mode" (one block per point in the density map), varying the size of each cube based on the value in my density map (floats in the range 0 to 1), I get something like this: I'm now happy that I can produce a density map for the marching cubes algorithm (which will need a little tweaking), but for some reason when I run it through my implementation it's not producing what I expect. My problem is that I'm getting something like the first image in this answer to my previous question, when I want to achieve the effect in the second image. Upon further investigation I can't see where marching cubes does the "move vertex along the edge" type logic (i.e. the difference between the two images in my previous link). I see that it does do some interpolation, but I'm not convinced I have the correct understanding of what it should do, because the code in question appears to give the same result regardless of whether I use boolean or float values. I took the code from here, which is a C# implementation of marching cubes, but instead of using the MarchingCubesPrimitive I modified it to accept an object of type IDrawable, containing lists for the various collections (vertices, normals, UVs, indices); the logic was otherwise untouched. My understanding is that, given a very low isovalue, the accuracy of the rendered surface should increase - in short, "fewer 45-degree slopes, more rolling hills" in the mesh output. However, this isn't what I'm seeing. Have I missed something, or is the implementation flawed and in need of fixing? EDIT: A little more detail on what I am seeing when I "marching cube" the data. First, ignore the fact that the meshes created by the chunks don't "connect" (I'll probably raise another question about that later). Then look at the shape of the island: it's too... square. From the voxels rendered as boxes you get the impression of a clean, soft, gradual hill, and yet in the mesh there are sharp falling edges even in the most central areas, where the gradient in the first image looks smoothest. The data is regenerated each time I run this, so no two islands come out the same - it's purely random, not based on noise - but still, how can it look so smooth in one image and so jagged in the other?
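
    For reference, the "move vertex along the edge" step usually means linearly interpolating the cut point between the two corner densities of each edge that crosses the iso level. A minimal sketch of that step is below (hypothetical names, not taken from the linked C# implementation). Note that with boolean 0/1 densities the interpolation factor is always 0.5, so every vertex lands mid-edge and the mesh stays blocky, which would explain identical results for boolean and float data if the float densities never actually reach this code.

        // Minimal sketch of the edge-interpolation step in marching cubes.
        // p1/p2 are the positions of the two cube corners on an edge and v1/v2
        // their density samples. With float densities the cut point slides along
        // the edge; with boolean (0/1) densities t is always 0.5, so every vertex
        // sits mid-edge and the mesh stays blocky.
        public class EdgeInterp {

            static double[] interpolate(double[] p1, double[] p2,
                                        double v1, double v2, double isoLevel) {
                double denom = v2 - v1;
                // Degenerate edge: both corners effectively at the iso level.
                if (Math.abs(denom) < 1e-9) {
                    return new double[] { p1[0], p1[1], p1[2] };
                }
                double t = (isoLevel - v1) / denom;   // 0..1 along the edge
                t = Math.max(0.0, Math.min(1.0, t));  // clamp against numeric drift
                return new double[] {
                    p1[0] + t * (p2[0] - p1[0]),
                    p1[1] + t * (p2[1] - p1[1]),
                    p1[2] + t * (p2[2] - p1[2])
                };
            }

            public static void main(String[] args) {
                // Density 0.2 at one corner, 0.9 at the other, iso level 0.5:
                // the cut point shifts toward the corner closer to the iso level.
                double[] v = interpolate(new double[] {0, 0, 0}, new double[] {1, 0, 0}, 0.2, 0.9, 0.5);
                System.out.printf("cut at x = %.3f%n", v[0]); // ~0.429 rather than 0.5
            }
        }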

    Read the article

  • Three Global Telecoms Soar With Siebel

    - by michael.seback
    Deutsche Telekom Group Selects Oracle's Siebel CRM to Underpin Next-Generation CRM Strategy The Deutsche Telekom Group (DTAG), one of the world's leading telecommunications companies, and a customer of Oracle since 2001, has invested in Oracle's Siebel CRM as the standard platform for its Next Generation CRM strategy; a move to lower the cost of managing its 120 million customers across its European businesses. Oracle's Siebel CRM is planned to be deployed in Germany and all of the company's European business within five years. "...Our Next-Generation strategy is a significant move to lower our operating costs and enhance customer service for all our European customers. Not only is Oracle underpinning this strategy, but is also shaping the way our company operates and sells to customers. We look forward to working with Oracle over the coming years as the technology is extended across Europe," said Dr. Steffen Roehn, CIO Deutsche Telekom AG... "The telecommunications industry is currently undergoing some major changes. As a result, companies like Deutsche Telekom are needing to be more intelligent about the way they use technology, particularly when it comes to customer service. Deutsche Telekom is a great example of how organisations can use CRM to not just improve services, but also drive more commercial opportunities through the ability to offer highly tailored offers, while the customer is engaged online or on the phone," said Steve Fearon, vice president CRM, EMEA Read more. Telecom Argentina S.A. Accelerates Time-to-Market for New Communications Products and Services Telecom Argentina S.A. offers basic telephone, urban landline, and national and international long-distance services...."With Oracle's Siebel CRM and Oracle Communication Billing and Revenue Management, we started a technological transformation that allows us to satisfy our critical business needs, such as improving customer service and quickly launching new phone and internet products and services." - Saba Gooley, Chief Information Officer, Wire Line and Internet Services, Telecom Argentina S.A.Read more. Türk Telekom Develops Benefits-Driven CRM Roadmap Türk Telekom Group provides integrated telecommunication services from public switched telephone network (PSTN) and global systems for mobile communications technology (GSM). to broadband internet...."Oracle Insight provided us with a structured deployment approach that makes sense for our business. It quantified the benefits of the CRM solution allowing us to engage with the relevant business owners; essential for a successful transformation program." - Paul Taylor, VP Commercial Transformation, Türk Telekom Read more.

    Read the article

  • ray collision with rectangle and floating point accuracy

    - by phq
    I'm trying to solve a problem with a ray bouncing on a box. Actually it is a sphere but for simplicity the box dimensions are expanded by the sphere radius when doing the collision test making the sphere a single ray. It is done by projecting the ray onto all faces of the box and pick the one that is closest. However because I'm using floating point variables I fear that the projected point onto the surface might be interpreted as being below in the next iteration, also I will later allow the sphere to move which might make that scenario more likely. Also the bounce coefficient might be as low as zero, making the sphere continue along the surface. So my naive solution is to project not only forwards but backwards to catch those cases. That is where I got into problems shown in the figure: In the first iteration the first black arrow is calculated and we end up at a point on the surface of the box. In the second iteration the "back projection" hits the other surface making the second black arrow bounce on the wrong surface. If there are several boxes close to each other this has further consequences making the sphere fall through them all. So my main question is how to handle possible floating point accuracy when placing the sphere on the box surface so it does not fall through. In writing this question I got the idea to have a threshold to only accept back projections a certain amount much smaller than the box but larger than the possible accuracy limitation, this would only cause the "false" back projection when the sphere hit the box on an edge which would appear naturally. To clarify my original approach, the arrows shown in the image is not only the path the sphere travels but is also representing a single time step in the simulation. In reality the time step is much smaller about 0.05 of the box size. The path traveled is projected onto possible sides to avoid traveling past a thinner object at higher speeds. In normal situations the floating point accuracy is not an issue but there are two situations where I have the concern. When the new position at the end of the time step is located very close to the surface, very unlikely though. When using a bounce factor of 0, here it happens every time the sphere hit a box. To add some loss of accuracy, the motivation for my concern, is that the sphere and box are in different coordinate systems and thus the sphere location is transformed for every test. This last one is why I'm not willing to stand on luck that one floating point value lying on top of the box always will be interpreted the same. I did not know voronoi regions by name, but looking at it I'm not sure how it would be used in a projection scenario that I'm using here.
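
    The standard trick for the "resting point read as below the surface next frame" worry is to compare signed distances against a small tolerance rather than against exactly zero, and to bias the settled position outward along the face normal by that same tolerance. The sketch below shows only that tolerance idea, with illustrative names and an illustrative epsilon; it is not the poster's engine and it does not address the back-projection heuristic itself:

        // Minimal sketch: treat anything within EPS of a face as "on" the face, and
        // when placing the sphere on a surface, nudge it out along the face normal by
        // EPS so the next step's test cannot see it as "below" due to floating point
        // error (e.g. after transforming between the sphere's and box's coordinates).
        public class SurfaceContact {

            // Tolerance: much smaller than the box, much larger than double rounding error.
            static final double EPS = 1e-6;

            // Signed distance of a point above a face through 'facePoint' with unit 'normal'.
            static double signedDistance(double[] p, double[] facePoint, double[] normal) {
                return (p[0] - facePoint[0]) * normal[0]
                     + (p[1] - facePoint[1]) * normal[1]
                     + (p[2] - facePoint[2]) * normal[2];
            }

            static boolean onOrAbove(double[] p, double[] facePoint, double[] normal) {
                return signedDistance(p, facePoint, normal) >= -EPS;
            }

            // Place the sphere centre on the (expanded) face, biased outward by EPS.
            static double[] settleOnFace(double[] p, double[] facePoint, double[] normal) {
                double d = signedDistance(p, facePoint, normal);
                return new double[] {
                    p[0] + (EPS - d) * normal[0],
                    p[1] + (EPS - d) * normal[1],
                    p[2] + (EPS - d) * normal[2]
                };
            }

            public static void main(String[] args) {
                double[] face = {0, 1, 0};                 // point on the top face
                double[] n = {0, 1, 0};                    // outward unit normal
                double[] centre = {0.3, 1.0 - 1e-9, 0.2};  // numerically "just below" the face
                System.out.println(onOrAbove(centre, face, n));        // true: within tolerance
                double[] settled = settleOnFace(centre, face, n);
                System.out.println(signedDistance(settled, face, n));  // ~1e-6, safely outside
            }
        }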

    Read the article

  • Wifi disabled for Intel Centrino Wireless-N 1000 Intel in 12.04

    - by new_bie
    Laptop model - HP- dm4 - 2070. I had faced the same problem for wireless being disabled in case of 11.10. It had to do with the new kernel. I thought with 12.04 this problem will be handled but the problem persists. Is there no way to get the wireless working except for the way mentioned in the following link ?? Wifi for Centrino Wireless-N 1000 Intel Corporation (HP pavillion dm4 - 2070us) is not working Output for sudo lshw -class network *-network UNCLAIMED description: Network controller product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress cap_list configuration: latency=0 resources: memory:c2500000-c2501fff *-network description: Ethernet interface product: AR8151 v2.0 Gigabit Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:08:00.0 logical name: eth0 version: c0 serial: 2c:41:38:07:f3:e3 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.116 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:43 memory:c1400000-c143ffff ioport:2000(size=128) Output for dmesg | grep iwl [ 14.742886] iwlwifi 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 14.742897] iwlwifi 0000:01:00.0: setting latency timer to 64 [ 14.743013] iwlwifi 0000:01:00.0: pci_resource_len = 0x00002000 [ 14.743016] iwlwifi 0000:01:00.0: pci_resource_base = ffffc90000c78000 [ 14.743018] iwlwifi 0000:01:00.0: HW Revision ID = 0x0 [ 14.743119] iwlwifi 0000:01:00.0: irq 42 for MSI/MSI-X [ 14.743161] iwlwifi 0000:01:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C [ 14.743229] iwlwifi 0000:01:00.0: L1 Enabled; Disabling L0S [ 14.765147] iwlwifi 0000:01:00.0: device EEPROM VER=0x15d, CALIB=0x6 [ 14.765151] iwlwifi 0000:01:00.0: Device SKU: 0X50 [ 14.765154] iwlwifi 0000:01:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3 [ 14.765907] iwlwifi 0000:01:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels [ 14.912840] iwlwifi 0000:01:00.0: request for firmware file 'iwlwifi-1000-5.ucode' failed. [ 14.914254] iwlwifi 0000:01:00.0: request for firmware file 'iwlwifi-1000-4.ucode' failed. [ 14.915718] iwlwifi 0000:01:00.0: request for firmware file 'iwlwifi-1000-3.ucode' failed. [ 14.916986] iwlwifi 0000:01:00.0: request for firmware file 'iwlwifi-1000-2.ucode' failed. [ 14.919391] iwlwifi 0000:01:00.0: request for firmware file 'iwlwifi-1000-1.ucode' failed. [ 14.919445] iwlwifi 0000:01:00.0: no suitable firmware found! [ 14.919783] iwlwifi 0000:01:00.0: PCI INT A disabled [ 2868.960807] Modules linked in: snd_hda_codec_hdmi snd_hda_codec_idt rfcomm bnep bluetooth parport_pc ppdev binfmt_misc hid_logitech_dj usbhid hid joydev snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq hp_wmi sparse_keymap hp_accel lis3lv02d input_polldev snd_timer snd_seq_device wmi iwlwifi snd mac80211 i915 cfg80211 rts_pstor(C) drm_kms_helper drm uvcvideo videodev psmouse soundcore mei(C) v4l2_compat_ioctl32 mac_hid serio_raw snd_page_alloc i2c_algo_bit video lp parport atl1c

    Read the article

  • EBS 11i and 12.1 Support Timeline Changes

    - by Steven Chan (Oracle Development)
    Two important changes to the Oracle Lifetime Support policies for Oracle E-Business Suite were announced at OpenWorld last week.  These changes affect EBS Releases 11i and 12.1. The changes are detailed in this My Oracle Support document: E-Business Suite 11.5.10 Sustaining Support Exception & 12.1 Extended Support Now to Dec. 2018 (Note 1495337.1) 1. Changes for EBS 11i Sustaining Support The first change is that  we will be providing an exception for the first 13 months of Sustaining Support on Oracle E-Business Suite Release 11.5.10 (11i10), valid from December 1, 2013 – December 31, 2014. This exception support will be comprised of three components: New fixes for Severity 1 production issues United States Form 1099 2013 year-end updates Payroll regulatory updates for the United States, Canada, United Kingdom, and Australia for fiscal years ending in 2014 Customers environments must have the minimum baseline patches (or above) for new Severity 1 production bug fixes as documented here: Patch Requirements for Extended Support of Oracle E-Business Suite Release 11.5.10 (Note 883202.1) 2. Changes for EBS 12.1 Extended Support More time:  Extended Support period for E-Business Suite Release 12.1 has been extended by nineteen months through December, 2018. Customers with an active Oracle Premier Support for Software contract will automatically be entitled to Extended Support for E-Business Suite 12.1. Fees waived:  Uplift fees are waived for all years of Extended Support (June, 2014 – December. 2018) for customers with an active Oracle Premier Support for Software contract. During this period, customers will receive all of the components of Extended Support at no additional cost other than their fees for Software Update License & Support. Where can I learn more? There are two interlocking policies that affect the E-Business Suite:  Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases What about E-Business Suite technology stack components?Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database.  To learn more about the interlocking EBS+techstack component support windows, see these two articles: On Apps Tier Patching and Support: A Primer for E-Business Suite Users On Database Patching and Support: A Primer for E-Business Suite Users Related Articles Extended Support Fees Waived for E-Business Suite 11i and 12.0 EBS 12.0 Minimum Requirements for Extended Support Finalized

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future.   The Job Fair Initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - Our responsibility as PMI volunteers at all levels towards our members and surrounding community.   Our Free-to-members Annual Job Fair will play an important role in connecting Recruiters, Exhibitors and Job Seekers together thereby helping hiring companies gain access to a large talent pool at an affordable cost (Totally free in certain cases, details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area.   My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, in a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation.   I chose an initial role as Recruiter Relations Lead which entails developing documentation and timelines for our project plan with regards to Recruiter Engagement as well as reaching out to recruiting companies to meet target representation at the Job Fair.   Being heavily involved in the local Technical community has afforded me the privilege of coming in contact with many reputable Technology Recruiting companies. (As a matter of fact, I already have 2 interested very reputable IT recruiting firms willing to join us at the fair)   The excitement for me however will be finding and reaching out to recruiters in areas of Project Management and Leadership that I might not have been exposed to before including Finance, Healthcare and Marketing, to name a few.   Keep an eye in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010.   Environment.Exit(0);   -Sam Abraham Site Director - West Palm Beach .Net User Group Recruiter Relations Lead - PMI South Florida Job Fair 2010 Project Lead - Mentoring Programs- PMI South Florida

    Read the article

  • BI&EPM in Focus April 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News Oracle OpenWorld call for papers now open, now through April 9 (link) Oracle Announces Availability of Oracle Exalytics In-Memory Machine (link) Oracle EPM and BI Support Newsletter Current Edition - Volume 3 : March 2012 (link) Customers Asiana Airlines Improves Passenger Management with Near-Real-Time Reservation and Ticketing Information  Centraal Boekhuis Delivers Faster with Oracle BI 11g Essatto Software Speeds Data Aggregation Tenfold; Integrates BI, Performance Management, and Data Warehousing for Midsize Businesses Grupo WTorre Supports Management's Decision-Making with OBIEE, Ensuring Uniform, Reliable, and Consistent Data Indian Overseas Bank Cuts Planning Schedule by 45 Worker Days per Year, Assesses Market Risk Instantly with Business Intelligence System Kentucky Community and Technical College System Enables Data-Driven Decision-Making Using Integrated System with Management Dashboards National Australia Bank Achieves 200% ROI, Improves Data Quality and Reporting Integrity with Oracle Hyperion DRM R.L. Polk & Co. Enhances Business Intelligence Capabilities, Optimizes System Performance with Extreme Analytics Machine Test ResCare, Inc. Transforms Reporting to Improve Healthcare Service Performance with Oracle Business Analytics  Rochester City School District Uses OBIEE to Track Student Achievement, Identify Areas for Improvement, Accelerate Reporting  Société Générale Standardizes, Accelerates, and Improves Budget Planning Accuracy across Global Enterprise The State Accounting Office of Georgia Integrates Financial Information, Shortens Financial Closings and Streamlines Reporting across 175 Organizations   Events 4-day Oracle Real-Time Decisions Hands-on Technical Workshop for Partners (PTS, Free) May 14-17, 2012: Colombes, Paris, France Nordic events : “Latest Release of Oracle Hyperion EPM and BI Suites Helps Organizations Plan through Uncertainty, Improve Decision-Making and Meet Regulatory Requirements” (April 17, Sweden | April 18, Norway | April 19, Denmark | April 24, Finland) Webcast Replay from Balaji Yelamanchili and Paul Rodwick: “Analytics Without Limits - The Latest on Oracle Exalytics In-Memory Machine and Oracle Business Intelligence”  (link)  Wednesday, April 04, 2012: Business Analytics launch webcast: Invite your customers to register (link) Big Data Online Forum now available on Demand (link)  Enterprise Performance Management Webcast Replay: Accurate Forecasting within the Business Planning Cycle (link) Oracle Hyperion Profitability and Cost Management (HPCM) Master Support Note (link) Business  Intelligence Whitepaper: Driving Innovation Through Analytics (link) Gartner: CIOs Identify BI as the No. 1 Technology Priority for 2012 (link) Webcast Replay: Exalytics in Action: Airlines, US Census and Federal Spending Demo Applications  (link) NEWLY RELEASED Walk-in Video for Exalytics - Use This to Start Customer/Partner Meetings! (link) IDC Insight Paper: “Oracle's All-Out Assault on the Big Data Market: Offering Hadoop, R, Cubes, and Scalable IMDB in Familiar Packages” (link) System Requirements and Supported Platforms for Oracle Business Intelligence Suite Enterprise Edition 11gR1 Certification Matrix now published to include OBIEE 11.1.1.6.0 (link) Maintenance Release Guide (List of Bugs Fixed) for Oracle Business Intelligence Enterprise Edition (OBIEE) 11.1.1.6.0  (link) OBIEE 11.1.1.6: Is OBIEE 11.1.1.6 Certified With OBI Apps 7.9.6.3?  
(link) Information Center: Troubleshooting Oracle Business Intelligence Applications (support login req'd)  (link)      

    Read the article

  • Time passage arithmetic explanation

    - by Cyber Axe
    I ported this from http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5 some time ago. However i'm now wanting to modify it for the purpose of changing it from floating point to fixed point maths for enhanced efficiency (for those who are going to talk about premature optimization and what not, i want to have my entire engine in fixed point both as a learning process for me and so i can port code more easily to systems in the future that dont have native floating points such as arm cpus) My initial conversion to fixed points just resulted in the cycling stuck on either the first or last frame of cycling. Plus it would be nice to understand better how it works so i can add more options and so forth in the future, my maths however sucks and the comments are limited so i don't really know how the maths work for determining the frame it shoud use (cycleAmount) I was also a beginner when i ported it as i had no idea between floating points and integers and what not. So in summary my question is, can anyone give an explination of the arithmatic used for determining the cycleAmount (which determings the "frame" of the cycle) This is the working floating point maths version of the code: public final void cycle(Colour[] sourceColours, double timeNow, double speedAdjust) { // Cycle all animated colour ranges in palette based on timestamp. sourceColours = sourceColours.clone(); int cycleSize; double cycleRate; double cycleAmount; Cycle cycle; for (int i = 0, len = cycles.length; i < len; ++i) { cycle = cycles[i]; cycleSize = (cycle.HIGH - cycle.LOW) + 1; cycleRate = cycle.RATE / (int) (CYCLE_SPEED / speedAdjust); cycleAmount = 0; if (cycle.REVERSE < 3) { // Standard Cycle cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); if (cycle.REVERSE < 1) { cycleAmount = cycleSize - cycleAmount; // If below 1 make sure its not reversed. } } else if (cycle.REVERSE == 3) { // Ping-Pong cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize << 1); if (cycleAmount >= cycleSize) { cycleAmount = (cycleSize * 2) - cycleAmount; } } else if (cycle.REVERSE < 6) { // Sine Wave cycleAmount = DFLOAT_MOD((timeNow / (1000 / cycleRate)), cycleSize); cycleAmount = Math.sin((cycleAmount * 3.1415926 * 2) / cycleSize) + 1; if (cycle.REVERSE == 4) { cycleAmount *= (cycleSize / 4); } else if (cycle.REVERSE == 5) { cycleAmount *= (cycleSize >> 1); } } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } if (USE_BLEND_SHIFT) { blendShiftColours(sourceColours, cycle, cycleAmount); } else { shiftColours(sourceColours, cycle, cycleAmount); } if (cycle.REVERSE == 2) { reverseColours(sourceColours, cycle); } } colours = sourceColours; } // This utility function allows for variable precision floating point modulus. private double DFLOAT_MOD(final double d, final double b) { return (Math.floor(d * PRECISION) % Math.floor(b * PRECISION)) / PRECISION; }
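
    As for what the original floating-point code is doing: cycleRate works out to palette steps per second, so timeNow / (1000 / cycleRate) is elapsed milliseconds times steps-per-millisecond, i.e. "how many steps should have happened by now", and the DFLOAT_MOD by cycleSize wraps that count back into the colour range; the reverse, ping-pong and sine branches only reshape that wrapped value. A common way to port this to fixed point is to keep the step count in something like 16.16 format so the divide and modulus stay in integers. The sketch below covers only the forward standard-cycle case, with my own names rather than the engine's:

        // Minimal sketch (not the engine's API) of the standard-cycle case in 16.16
        // fixed point. timeNowMs is in milliseconds; rateFp is palette steps per
        // second scaled by FP_ONE. stepsFp = timeNowMs * rateFp / 1000 wraps with an
        // integer modulus, so no floating point is needed.
        public class FixedPointCycle {

            static final int FP_SHIFT = 16;
            static final long FP_ONE = 1L << FP_SHIFT;   // 1.0 in 16.16

            // Returns the current cycle offset in 16.16 fixed point, in [0, cycleSize).
            static long cycleAmountFixed(long timeNowMs, long rateFp, int cycleSize) {
                // steps elapsed = timeMs * (steps per second) / 1000, still in 16.16
                long stepsFp = timeNowMs * rateFp / 1000;
                long cycleSizeFp = (long) cycleSize << FP_SHIFT;
                long wrapped = stepsFp % cycleSizeFp;    // integer modulus replaces DFLOAT_MOD
                return wrapped < 0 ? wrapped + cycleSizeFp : wrapped;
            }

            public static void main(String[] args) {
                int cycleSize = 8;                       // 8 colours in the cycled range
                long rateFp = 3 * FP_ONE;                // 3 steps per second
                for (long t = 0; t <= 4000; t += 1000) {
                    long amount = cycleAmountFixed(t, rateFp, cycleSize);
                    // integer frame index plus fractional part for blend-shifting
                    long frame = amount >> FP_SHIFT;
                    double frac = (amount & (FP_ONE - 1)) / (double) FP_ONE;
                    System.out.printf("t=%dms frame=%d frac=%.2f%n", t, frame, frac);
                }
            }
        }

    One likely culprit in a naive integer conversion is the 1000 / cycleRate sub-expression: in integer arithmetic it truncates badly when cycleRate is fractional, which can pin the result at zero or at the wrap point and leave the cycle stuck on its first or last frame.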

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application. So we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for a well-architected separation of business logic and presentation layers. While WPF is mature, it hasn't lived the long life that WinForms has yet, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we need to present lots of data, and not surprisingly much of it is in grid format with the need for high performance, grouping, inline editing, aggregation, printing and exporting, and the other things we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI in which the user can rearrange, shrink, and grow the tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense, so I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation; it's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in the documentation, reference materials, and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given that I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are:
    - Telerik's RadControls for WPF, since I had heard some good things about Telerik
    - Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools
    - WPF Toolkit on CodePlex, since many of my colleagues have used it
    - Blacklight CodePlex project, which had WPF support for the Tile View control (with Release 4.3, WPF is no longer going to be supported in favor of focusing only on Silverlight controls, so I dropped it from the comparison)
    Click Here to Download the WPF Control Toolkits Comparison. Hopefully this helps someone out there. Feel free to post a comment on your experiences, or if you think something I listed is incorrect or missing.

    Read the article

  • Memory Glutton

    - by AreYouSerious
    I have to admit that I can't get enough storage. I have hard drives just sitting around in case I need to move something, or in case I'm going to a friend's and either they want something I have or I want something they might have. What I'm going to talk about today is cost-effective memory for devices. I don't know how this particular device will work in a camera, as that's not what I use in my camera; in fact, I don't have a camera that doesn't use either SD or the old CompactFlash card (which isn't so compact anymore). There's this adapter that uses two microSD cards to double the capacity of your memory, and it costs about 4 bucks, without the microSD cards. I have had one for about a year and was going to throw it away because I couldn't get it to work with my computer or with my Sony Reader. However, in one last-ditch effort, I found out that this thing works beautifully with my Sony PSP. There is no software to speak of associated with it: you simply put in two cards of the same size (if you put in two different sizes it will still work, but you'll only double the smaller card's size) and format through the PSP. Voilà, you now have a 29 GB memory card for your PSP. Why is this important? Well, for starters, you can carry more music and more videos. Second, if you have gone the way of the hacker, you can store more games on your card. There are just a few things to note, and I speak from experience. First, you have to use the USB connection to the PSP to do any file moving; as I said previously, the card doesn't play well with my computers or card readers. I'm not saying it won't work at all, just that it hasn't worked with anything I own. Second, if for some reason you have hacked/cracked your PSP, don't attempt to delete a game from the PSP itself; use the USB file browser to remove games. If you delete from the PSP, you are likely to have to move all your files off, reformat, and start again. Just a couple of things I have noticed... if I had done something like that. Anyway, here's a link: http://www.photofast-adapter.com/ and if you want to buy one, get it off eBay; I've seen them as low as $1.99.

    Read the article

  • Ubuntu won't suspend automatically any more

    - by Sparhawk
    In the last month or so, Ubuntu (12.04) has stopped sleeping automatically. I've gone to System Settings > Power and verified (and toggled) "suspend on inactive for" to 5 minutes (for both battery and "when plugged in"), but the system stays awake. I've also tried using code similar to
    $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 300
    $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 300
    to set the timeout values. I've also verified these in dconf Editor. Previously, I could set this quite low to make my computer sleep quickly, but now that no longer works either.
    I'm not sure if this is relevant, but under old versions of Ubuntu, if I wanted my computer to never suspend (via the CLI), I would also have to set
    $ gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac false
    At some point this seems to have been deprecated (and it also gave me the error "No such key 'sleep-inactive-ac'"). I found that it was enough to set sleep-inactive-ac-timeout to 0. This worked for a while, but at some point auto-suspend stopped working, as stated above. Oddly enough, the sleep-inactive-ac key is still present when I look via dconf Editor; however, when I click it, it says "no schema", and the summary and other fields are blank.
    To test whether the dconf power plugin was working, I tried playing around with other settings in the schema. idle-dim-time and idle-dim-ac work as expected. However, setting sleep-display-ac to 5 seconds has no effect.
    I'm also not sure if this is relevant, but I've uninstalled gnome-screensaver and installed xscreensaver. I have tried killing xscreensaver and re-installing gnome-screensaver, but this did not help.
    I've also had some trouble with DPMS. I'm not sure if this is related, but I'll put the information here just in case. Using xscreensaver, I set Power Management to enabled, with standby and suspend timeouts of 10 minutes. I've verified these settings in ~/.xscreensaver and xset q. However, the screen blanks after about 30 seconds. If I turn off DPMS (either via the xscreensaver GUI or by modifying ~/.xscreensaver), it won't blank at all, so I know that DPMS is at least partially reading the xscreensaver settings.
    -- edit
    I've attempted more troubleshooting by creating a new user account, then logging out of the main account and into the new one. I've tried modifying the timeouts via dconf, but I get the same results as above (i.e. it doesn't work, nor does sleep-display-ac, but idle-dim-time and idle-dim-ac work). Also, the deprecated sleep-display-ac key is not visible, so I think that this is probably unrelated.
    -- edit
    I've since moved to gnome-shell instead of Unity and still have this problem, so I guess it's something to do with gnome-power-manager.

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments; then I came across this site, so my apologies to anyone who has seen this already.) I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar, and seemingly every system dialog have forgotten the out-of-the-box "Ambiance" theme they conformed to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does.
    I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf*, .gnome*, and other similar directories. I have followed all this advice and nothing works; I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around, I've given up and rebooted the VM instance, and been pleasantly surprised to see the original "Ambiance" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it.
    Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white.
    This last time (just yesterday), trying numerous combinations of all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: how do I recover the out-of-the-box theme for my GNOME/Ubuntu desktop, noting that blowing away all the config files (as suggested in many places online) fails to achieve this?
    It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.

    Read the article

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers, Senior Director, Product Management, Oracle Fusion Middleware
    All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO has probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success.
    I was reminded of this while reading the excellent interview of Mark Hurd by CNBC’s Maria Bartiromo in a recent issue of USA TODAY (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, leadership, and Oracle’s growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was IT spending trends, on which Mark Hurd observed, “…budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan.”
    In an era of economic uncertainty and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs.
    I have found that many CIOs are trapped by their enterprise systems platforms, which were originally architected for Standardization, Compliance, and tightly integrated linear Workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of Data, the Consumerization of IT, the rise of Social Media, and the need for continual Business Process Reengineering. These were simply not the design criteria for Enterprise 1.0, and attempting to meet them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost vs. investment trap, and it is what most constrains CIOs from achieving the balance they need.
    But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is ‘Business-Centric’ rather than ‘ERP-Centric’, which defined the architecture of Enterprise 1.0. Oracle’s best-in-class suite of Fusion Middleware products enables a layered approach to enterprise systems architectures that provides the balance an enterprise needs. The most exciting part of all this? The bottom two layers are focused upon reducing costs, and the upper two layers provide business value and innovation. Finally, the Balance a CIO needs.
    Additional Information
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

< Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >