Search Results

Search found 22078 results on 884 pages for 'composite primary key'.

Page 445/884

  • Ubuntu 9.10 + LXDE: unmet dependencies error

    - by skg
    I am working on Ubuntu 9.10 + LXDE for my current project, and I am finding it difficult to install any new packages on this OS. For example, I need to install the OpenCV package for a Qt webcam application, but every time it gives an unmet-dependencies error. When I try to install the dependencies, apt reports still more dependencies, and so on, in a circular loop. For example:

      sudo apt-get install libcv-dev
      ...
      The following packages have unmet dependencies:
        libcv-dev: Depends: libhighgui-dev (= 1.0.0-6.2ubuntu1) but it is not going to be installed
      E: Broken packages

    Then I try to install libhighgui-dev:

      sudo apt-get install libhighgui-dev
      ...
      The following packages have unmet dependencies:
        libhighgui-dev: Depends: libgtk2.0-dev but it is not going to be installed
      E: Broken packages

    And this goes on, never ending. Please suggest something; I have googled a lot and have not found any help. It may be that I am missing some primary packages, but how do I find out which? Any suggestion/help/link will be really appreciated. Thanks in advance.
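    For what it's worth, chains like this usually bottom out in one package that is held back or version-pinned; sudo apt-get -f install sometimes clears it, and printing the full Depends tree shows where the pin sits. Below is a minimal diagnostic sketch in Python that shells out to apt-cache (the starting package is just this question's example):

      #!/usr/bin/env python3
      # Print a package's Depends tree via apt-cache to find the broken link.
      # A diagnostic sketch, not a fix; run on the affected machine.
      import subprocess

      def direct_depends(pkg):
          # `apt-cache depends -i` prints only the important (Pre-)Depends lines.
          out = subprocess.run(["apt-cache", "depends", "-i", pkg],
                               capture_output=True, text=True).stdout
          return [line.split(":", 1)[1].strip()
                  for line in out.splitlines() if "Depends:" in line]

      def walk(pkg, seen=None, depth=0):
          seen = set() if seen is None else seen
          print("  " * depth + pkg + ("  (cycle)" if pkg in seen else ""))
          if pkg in seen:
              return
          seen.add(pkg)
          for dep in direct_depends(pkg):
              walk(dep, seen, depth + 1)

      if __name__ == "__main__":
          walk("libcv-dev")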

    Read the article

  • Gnome3 shell video corruption with ATI Radeon HD 4850 on 11.10 Oneiric

    - by AndyAtTheWebists
    I have a problem similar to what's been mentioned in a few other questions, namely "Ati incompatible with Gnome-shell?" and "Gnome Shell Glitched Top Bar Ubuntu 11.10". I have read a number of other posts here and on other forums and have tried a bunch of different solutions. The problem: it manifests itself only in GNOME 3; I have tried KDE, Unity and XFCE and all are fine. The graphics corruption is visible only on GNOME panels. Everything works fine with the free ATI drivers, but they just lack power; the problem occurs with the proprietary ones from AMD/ATI. I have installed versions 11.11 and 12.1 as per the instructions on wiki.cchtml.com/index.php/Ubuntu_Oneiric_Installation_Guide, and I have the exact same problem in both cases. I have tried this on clean installs of Ubuntu 11.10 Oneiric Ocelot (x64 and x86) and on Linux Mint 12 (x64) with the same results. Also, it looks like after some time of not using it, the PC just freezes. Maybe an overheat? Will look into it. Things I've tried: different versions of drivers from ATI, including the latest (this worked for some, not for me); installing an Oneiric-specific package generated from the driver, as well as the default install; removing the Unity global menu; disabling file-manager handling of the desktop; disabling Compiz "detect refresh rate"; Sync to VBlank in Compiz on and off. Please help! This is the first time in 10 years that I've finally had the chance to switch my primary desktop to Linux (I stopped doing .NET dev work), and this is really getting me down. [Screenshots of the corruption accompanied the original question.]

    Read the article

  • In cPanel, mail goes to spam instead of inbox in Gmail

    - by Robin Jain
    I have a cPanel VPS server. I created a domain on the same server, but when I send a mail through webmail to a Gmail address, it goes into spam. Note: the mail IP is not blacklisted, SPF records are enabled, DKIM is enabled, and reverse DNS is perfect. Email header information:

      Delivered-To: [email protected]
      Received: by 10.143.93.13 with SMTP id v13csp119806wfl; Fri, 6 Jul 2012 08:01:36 -0700 (PDT)
      Received: by 10.182.52.42 with SMTP id q10mr26133912obo.46.1341586895571; Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from lakshyacs-u.securehostdns.com ([50.97.147.134]) by mx.google.com with ESMTPS id fx3si18028369obc.144.2012.07.06.08.01.35 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
      Received-SPF: pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) client-ip=50.97.147.134;
      Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) [email protected]
      Received: from localhost.localdomain ([127.0.0.1]:39016 helo=harishjoshico.com) by lakshyacs-u.securehostdns.com with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SnA2J-0006Nq-05 for [email protected]; Fri, 06 Jul 2012 20:31:35 +0530
      Received: from 223.189.14.213 ([223.189.14.213]) (SquirrelMail authenticated user [email protected]) by harishjoshico.com with HTTP; Fri, 6 Jul 2012 20:31:35 +0530
      Message-ID: <[email protected]>
      Date: Fri, 6 Jul 2012 20:31:35 +0530
      Subject: ggglkhl
      From: [email protected]
      To: [email protected]
      User-Agent: SquirrelMail/1.4.22
      MIME-Version: 1.0
      Content-Type: text/plain;charset=iso-8859-1
      Content-Transfer-Encoding: 8bit
      X-Priority: 3 (Normal)
      Importance: Normal
      X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
      X-AntiAbuse: Primary Hostname - lakshyacs-u.securehostdns.com
      X-AntiAbuse: Original Domain - gmail.com
      X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
      X-AntiAbuse: Sender Address Domain - harishjoshico.com

      jhkhl
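    Worth noting: Gmail's Authentication-Results header above records spf=pass but no DKIM verdict, and there is no DKIM-Signature header at all, so the message does not appear to have been DKIM-signed despite signing being enabled on the server; that, plus a new low-reputation domain and a gibberish subject/body, is typically enough for the spam folder. A minimal sketch using only Python's standard library to pull the relevant headers (it assumes the block above is saved as headers.txt):

      #!/usr/bin/env python3
      # Report the authentication-related headers Gmail recorded.
      # Uses only the standard library; headers.txt holds the raw block above.
      import email

      with open("headers.txt") as f:
          msg = email.message_from_file(f)

      for name in ("Received-SPF", "Authentication-Results",
                   "DKIM-Signature", "Return-Path"):
          for value in msg.get_all(name) or ["<missing>"]:
              print(f"{name}: {value}")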

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama", and it is not a post about whom I am voting for. Information is sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion-dollar real estate. Stein plans on addressing what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests through the creation of sustainable infrastructure based on clean renewable energy generation and sustainable-communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Its "Ten Key Values," described as non-authoritative guiding principles, are as follows: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; future focus and sustainability. The Green Party does not accept donations from corporations; thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • Get Started with JavaFX 2 and Scene Builder

    - by Janice J. Heiss
    Up on otn/java is a very useful article by Oracle Java/Middleware/Core Tech Engineer Mark Heckler, titled, “How to Get Started (FAST!) with JavaFX 2 and Scene Builder.”  Heckler, who has development experience in numerous environments, shows developers how to develop a JavaFX application using Scene Builder “in less time than it takes to drink a cup of coffee, while learning your way around in the process”. He begins with a warning and a reassurance: “JavaFX is a new paradigm and can seem a bit imposing when you first take a look at it. But remember, JavaFX is easy and fun. Let's give it a try.” Next, after showing readers how to download and install JDK/JavaFX and Scene Builder, he informs the reader that they will “create a simple JavaFX application, create and modify a window using Scene Builder, and successfully test it in under 15 minutes.” Then readers download some NetBeans files:“EasyJavaFX.java contains the main application class. We won't do anything with this class for our example, as its primary purpose in life is to load the window definition code contained in the FXML file and then show the main stage/scene. You'll keep the JavaFX terms straight with ease if you relate them to the theater: a platform holds a stage, which contains scenes. SampleController.java is our controller class that provides the ‘brains’ behind the graphical interface. If you open the SampleController, you'll see that it includes a property and a method tagged with @FXML. This tag enables the integration of the visual controls and elements you define using Scene Builder, which are stored in an FXML (FX Markup Language) file. Sample.fxml is the definition file for our sample window. You can right-click and Edit the filename in the tree to view the underlying FXML -- and you may need to do that if you change filenames or properties by hand - or you can double-click on it to open it (visually) in Scene Builder.” Then Scene Builder enters the picture and the task is soon done. Check out the article here.

    Read the article

  • Worker roles in Windows Azure to host a multiplayer server

    - by MrWiggels
    I've been doing research on where to host a simple multiplayer backend for a simple game I'm developing. As a first choice I downloaded the Windows Azure SDK, which provides a nice and simple emulator environment where you can test your application before uploading. I also downloaded the Azure Social Game Toolkit and followed as far as my understanding can take me. So, down to the main question: is there anybody with experience developing Azure applications? I'm developing an action RPG game, in a similar vein to Diablo III, and I was thinking of putting up matchmaking, friends lists, etc. Is there a way to connect to Azure services via something like UDP or TCP for sending packets, or does everything have to go through HTTP requests? Is it even possible to use HTTP request/response for something like this? All game commands will be simple. Because the game server and the clients will be kept in sync and will have deterministic actions, I'm just going to send actions like "Use Primary Skill" and "Use Secondary Skill". Any hints, ideas, light bulbs or a smack-in-the-face presentation will be much appreciated.
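    For what it's worth, worker roles are not limited to HTTP: a worker role can declare a raw TCP input endpoint and run an ordinary socket listener, which suits a command stream like this (UDP support has historically been more limited). Below is a minimal, Azure-agnostic sketch of such a TCP command loop; the port and command names are illustrative, not from any Azure API:

      #!/usr/bin/env python3
      # Minimal TCP command loop of the kind the question describes.
      # Nothing here is Azure-specific; a worker role would open the same
      # kind of socket on its configured input endpoint.
      import socketserver

      class CommandHandler(socketserver.StreamRequestHandler):
          def handle(self):
              # One newline-terminated command per line, e.g. b"PRIMARY_SKILL\n"
              for line in self.rfile:
                  cmd = line.strip().decode()
                  if cmd in ("PRIMARY_SKILL", "SECONDARY_SKILL"):
                      self.wfile.write(b"ACK " + cmd.encode() + b"\n")
                  else:
                      self.wfile.write(b"ERR unknown command\n")

      if __name__ == "__main__":
          with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), CommandHandler) as srv:
              srv.serve_forever()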

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent "Common Approach to Federal Enterprise Architecture" (US Executive Office of the President, May 2, 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes -- for example the OMB's Exhibit 300 (i.e. Business Case justification for IT investments). A very interesting element of this Approach is the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral -- good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA Reference Models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn't always clear, however, particularly during the first interactions of a Program's technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings). The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other Business Case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. "Actionable" is the key word -- too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect's office. Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the "Solution Architect" (the "Problem-Solving" role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual "EA Roadmap" documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned with both contextual expectations (for example the new "Digital Government Strategy" and its required roadmap deliverables) and the rapidly increasing complexities of today's more portable and transparent IT solutions.
    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of "Federal Enterprise Solution Architect" and the first waves of the newly-formal "Federal IT Program Manager" roles operating under more standardized "critical competency" expectations (including EA), which are likely already influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain -- Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something, and here my problem comes in. As I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would otherwise be possible, but it definitely adds another layer of complexity. Take this example of a migration in Lobos, from a blog post:

      (defmigration add-posts-table
        (up []
          (create clogdb
            (table :posts
              (integer :id :primary-key)
              (varchar :title 250)
              (text :content)
              (boolean :status (default false))
              (timestamp :created (default (now)))
              (timestamp :published)
              (integer :author [:refer :authors :id] :not-null))))
        (down []
          (drop (table :posts))))

    It is very readable indeed, but it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom to write my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

    Read the article

  • Boot problems w/ External HDD

    - by JeremyT
    I'm having a problem booting a live image off of a USB hard drive. I used Startup Disk Creator to make an Ubuntu 11.10 live image on a partition on my external drive so that I can use it on various computers, and set it up with ~4GB of persistent data. So far I've tried it on two computers. One, a Dell, will boot to my external, but it loads the boot-device selection very slowly: it hangs on a screen with an underscore for up to 5 minutes until it allows me to select which drive I want to boot from. After that, it works great. However, on my primary computer it doesn't work at all. It's an Acer from 2009 and it won't even recognize the drive; I don't see it anywhere in the BIOS or in the boot-disk selection list. The light on the external comes on at boot just fine, so I know it's being powered. Both computers worked perfectly with my 4GB flash drive that I had partitioned half and half (first half FAT32, second half Ubuntu 11.10 live).

    Read the article

  • Data that has been deleted in P6: how is it updated in Analytics?

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0, the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs, or those deletes would be lost; after the ETL process has run, refrdel can be cleared out. It is important to keep any purging of refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher, the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs no longer applies as of this release. In the Extended Schema tables (e.g. TaskX) there can still be deleted data present; the Extended Schema views join on the primary PMDB tables (e.g. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally, but it does not need to run often unless large amounts of data have been deleted.

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark the two cases of self-referential many-to-many described in the DataMapper associations documentation. Both cases consist of an Item class, which may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items, and benchmarked the require/unrequire functions as below:

      Benchmark.bmbm do |x|
        x.report("require:") { item_1.require_item item_2, 10 }
        x.report("unrequire:") { item_1.unrequire_item item_2 }
      end

    To be clear, both functions are DataMapper add/modify calls like:

      componentMaps.create :component_id => item.id, :quantity => quantity
      componentMaps.all(:component_id => item.id).destroy!

    and

      links_to_components.create :component_id => item.id, :quantity => quantity
      links_to_components.all(:component_id => item.id).destroy!

    The results are variable, in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by:

      (1..10000).each do |i|
        Item.create :name => "item_#{i}"
      end

      Benchmark.bmbm do |x|
        x.report("Get") { item = Item.get 9712 }
        x.report("First") { item = Item.first :name => "item_9712" }
      end

    where the results were very different, about 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.
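    Differences this small are easily noise from database round-trips and GC, so it helps to repeat each operation many times and compare medians and spread rather than single runs; Ruby's bmbm already does a warm-up pass, but a distribution gives an actual significance check. A sketch of the idea in Python's timeit (require_a and require_b are hypothetical stand-ins for the two association styles):

      #!/usr/bin/env python3
      # Compare two operations by distribution, not by one run.
      # require_a/require_b are placeholders for the two cases under test.
      import statistics
      import timeit

      def require_a():
          sum(range(1000))   # stand-in for case 1's require_item

      def require_b():
          sum(range(1000))   # stand-in for case 2's require_item

      for name, fn in (("case A", require_a), ("case B", require_b)):
          # 30 trials of 100 calls each; report median and spread.
          trials = timeit.repeat(fn, number=100, repeat=30)
          print(f"{name}: median={statistics.median(trials):.4f}s "
                f"stdev={statistics.stdev(trials):.4f}s")

    If the medians differ by less than a couple of standard deviations, the 0.018 vs 0.022 gap is probably not meaningful.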

    Read the article

  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite with subfolders. The site's subfolders are mapped to domains, which are set as primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (I'm getting page hits for each site in Google Analytics), barring the original site in the multisite, whose Content Overview lists the other domains along with the amount of traffic each is getting, alongside the original domain's traffic. I don't want to track any traffic for my original site other than what goes to it; i.e., I don't want it tracking my other sites in the multisite. E.g., domain1.com is my original, and I have lots of other sites in the multisite, say domain2.com and domain3.com. In Content Overview in Analytics it lists, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!

    Read the article

  • VNC grey screen and start on boot 12.04

    - by Siriss
    I have 12.04 LTS installed and I am trying to get VNC to work. I want to be able to connect to existing sessions, and have it start on boot. I followed this guide and have left a comment to try to fix my problems, but no dice. I have also tried all the solutions I found on Google, including the one here, but I could not get it to work (I am missing something easy, I am sure). When I connect to the VNC session I get a grey screen with three checkboxes: Accept clipboard from viewers, Send clipboard to viewers, Send primary selection to viewers. Here is my xstartup:

      #!/bin/sh
      # Uncomment the following two lines for normal desktop:
      unset SESSION_MANAGER
      # exec /etc/X11/xinit/xinitrc
      gnome-session -session=gnome-classic &
      [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
      [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
      xsetroot -solid grey
      vncconfig -iconic &
      #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
      #x-window-manager &

    I have also edited my startup configuration to include: /usr/bin/vncserver -geometry 1024x768. It does not start on boot, but when I run the command it starts; I just get the grey screen. Any help would be greatly appreciated. Thank you!!

    Read the article

  • Blink: Data vs. Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings every day make seemingly instantaneous choices -- in the blink of an eye -- and how we "think without thinking." These situations actually aren't as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:
    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?
    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last, based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables -- i.e., those who have perfected the art of "thin-slicing."
    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies. If you are a sales executive, ask yourself this: do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context? Three key findings in the report are:
    • Six out of ten executives say they rely more on data than instinct to drive decisions.
    • Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    • Senior sales executives rely more on current and historical data than on forecast data.
    Strikingly similar to what Gladwell concludes in Blink, the report's authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, "territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report's authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans." To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., a collection of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension to slice and dice sales results, using sales analytics and alerts to help you identify where problems are occurring.
    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • [Oracle Identity Manager] 11g R2 Bundle Patch 09 is Available!

    - by mustafakaya
    Oracle Identity Manager Bundle Patch 09 is available now. You can download BP09 from here. Also, there is an important recommendation for BP08! List of bugs fixed in BP09:
    Bug 12699224: Trusted source reconciliation fails to create users with many reconciliation field mappings.
    Bug 14407437: Provisioning through bulk request inserts null records into child tables.
    Bug 14493217: Target resource reconciliation throws an ORA-06512 error when the Descriptive field is mapped to a field that does not have a reconciliation field mapping.
    Bug 16044671: User form customization fails if a UDF contains an invalid character.
    Bug 16545968: Modifying any attribute on a service account changes the account type to a primary account.
    Bug 16562633: Oracle Identity Manager throws javax.el.ELException while viewing a profile under direct reports.
    Bug 16662834: User is not reprovisioned after the user is deleted and created in the target with the same orclguid.
    Bug 16662905: If an LOV field is required on an Application Instance form, no validation is enforced on the LOV field although it is required.
    Bug 16701873: The Members tab of a role displays only enabled users and does not display disabled users.
    Bug 16862846: When a notification is being sent, the mail ID in the Reply To field is set to the recipient's mail ID instead of the sender's mail ID.
    Bug 16824062: When you use the API to fetch or delete child data from an account, the child data row value is null; therefore, child data is not returned.
    Bug 16912736: There is a performance issue when the provisioned application instance details are opened for a user.

    Read the article

  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created for the user to browse. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes, and the search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the quantity they would like in the deck. This way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wants to make changes to the deck, they load it, it is parsed back into the table, and they can make their changes. A similar situation occurs when they eventually load the deck to play a game. I'm curious what the rest of you think of this method. Currently this is a personal project and I am the only one working on it, so if I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
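    For reference, here is a minimal sketch of that working-deck flow, using Python's sqlite3 for brevity (Android's SQLiteDatabase would take the same SQL); the table and column names are illustrative, not from the project:

      #!/usr/bin/env python3
      # Working-deck table: card primary key plus quantity while editing,
      # then export to XML and wipe, mirroring the described flow.
      import sqlite3
      import xml.etree.ElementTree as ET

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE deck (
                         card_id INTEGER PRIMARY KEY,  -- card's PK in the card table
                         qty     INTEGER NOT NULL CHECK (qty > 0)
                     )""")

      def add_card(card_id, qty=1):
          # Bump the count if the card is already in the deck, else insert it.
          cur = con.execute("UPDATE deck SET qty = qty + ? WHERE card_id = ?",
                            (qty, card_id))
          if cur.rowcount == 0:
              con.execute("INSERT INTO deck (card_id, qty) VALUES (?, ?)",
                          (card_id, qty))

      def export_deck():
          # The "save to XML, then wipe the table" step.
          root = ET.Element("deck")
          for cid, qty in con.execute("SELECT card_id, qty FROM deck ORDER BY card_id"):
              ET.SubElement(root, "card", id=str(cid), count=str(qty))
          con.execute("DELETE FROM deck")
          return ET.tostring(root, encoding="unicode")

      add_card(42, 3)
      add_card(42)
      add_card(7)
      print(export_deck())  # <deck><card id="7" count="1" /><card id="42" count="4" /></deck>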

    Read the article

  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic game: three or more same-color bubbles that touch pop, and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph: I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "moving bubble is closest to slot ij; neighbors of slot ij are bubbles a, b, c; moving bubble is sufficiently close to bubble b, hence moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
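    For what it's worth, the crude grid handles the falling question too: after each pop, flood-fill from the anchored top row, and anything unreached is disconnected. A minimal sketch on a rectangular grid (a real bubble shooter's staggered/hex grid only changes the neighbors function):

      #!/usr/bin/env python3
      # Find bubbles that should fall after a pop: breadth-first flood-fill
      # from the top (anchored) row; any bubble not reached is disconnected.
      from collections import deque

      def neighbors(r, c, rows, cols):
          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
              nr, nc = r + dr, c + dc
              if 0 <= nr < rows and 0 <= nc < cols:
                  yield nr, nc

      def floating_bubbles(grid):
          """grid[r][c] is None for an empty slot, else a color value."""
          rows, cols = len(grid), len(grid[0])
          queue = deque((0, c) for c in range(cols) if grid[0][c] is not None)
          anchored = set(queue)
          while queue:
              r, c = queue.popleft()
              for nr, nc in neighbors(r, c, rows, cols):
                  if (nr, nc) not in anchored and grid[nr][nc] is not None:
                      anchored.add((nr, nc))
                      queue.append((nr, nc))
          return [(r, c) for r in range(rows) for c in range(cols)
                  if grid[r][c] is not None and (r, c) not in anchored]

      grid = [["R", None, "G"],
              [None, None, "G"],
              ["B",  None, None]]
      print(floating_bubbles(grid))  # [(2, 0)] -- the lone B falls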

    Read the article

  • Is it possible to have all pre-Unity effects in the latest Ubuntu?

    - by iamserious
    I've been an Ubuntu dilettante for a long, long time. It is not my primary OS, but I've always had it on all my laptops and desktop machines for over 8 years now. What I really, really like(d) about Ubuntu (or Linux, for that matter) was the effects: the desktop cube, wobbly windows and other such "cool" effects. Needless to say, I was heartbroken by Unity. I gave it the benefit of the doubt and tried to like it, tried to love it, and stuck with it for over a year. But I recently came to the conclusion that Unity is really not for me. I want my old Ubuntu back, with all its eye-candy effects. I tried messing around with CCSM, but it only seemed to make matters worse. Now, I could just install an old version of Ubuntu, but getting my Bamboo pen and Nvidia to work with it is a son of a... anyway, the point is, older versions of Ubuntu are too outdated to work with my newer machines. I'd rather have a new version of Ubuntu but without Unity. I don't know what the eye-candy stuff is called, so my question is this: is it possible to rid Ubuntu of Unity and get all the eye-candy stuff? And if it is not possible, can you please advise me which other Linux supports all the effects? I've searched high and low for Unity alternatives but didn't find any satisfactory solutions, so I think my question is mainly about which other Linux flavour is better suited for the task. Sorry if it is off topic.

    Read the article

  • How do I configure WakeOnUSB properly?

    - by wishi
    How do I configure Wake-on-USB properly on a 10.04 or 10.10 Ubuntu (2.6.36 and higher if needed)? (Wake-on-USB is when the computer is asleep and, for example, a USB keyboard event wakes up the machine.) The notebook is an Acer Aspire Timeline X 1830T. I don't know in which way the Linux kernel supports the controllers. There are different ways to approach this: for example /proc/acpi/wakeup, or udev, or something with HAL? /proc/acpi/wakeup shows every device in S4, but I need S3:

      Device  S-state  Status     Sysfs node
      P0P2    S4       *disabled
      PEGP    S4       *disabled
      P0P1    S0       *disabled  pci:0000:00:1e.0
      EHC1    S4       *disabled  pci:0000:00:1d.0
      USB1    S4       *enabled
      USB2    S4       *disabled
      USB3    S4       *disabled
      USB4    S4       *disabled
      EHC2    S4       *disabled  pci:0000:00:1a.0
      USB5    S4       *disabled
      USB6    S4       *disabled
      USB7    S4       *disabled
      HDEF    S0       *disabled  pci:0000:00:1b.0
      RP01    S5       *disabled  pci:0000:00:1c.0
      PXSX    S5       *disabled  pci:0000:01:00.0
      RP02    S0       *disabled  pci:0000:00:1c.1
      PXSX    S5       *disabled  pci:0000:02:00.0
      RP03    S0       *disabled
      PXSX    S5       *disabled
      RP04    S0       *disabled
      PXSX    S5       *disabled
      RP05    S0       *disabled
      PXSX    S5       *disabled
      RP07    S0       *disabled
      PXSX    S5       *disabled
      RP08    S0       *disabled
      PXSX    S5       *disabled
      GLAN    S0       *disabled
      PEG3    S4       *disabled
      PEG5    S4       *disabled
      PEG6    S4       *disabled
      SLPB    S3       *enabled

    S4 is Suspend-to-Disk, AFAIK. It doesn't seem to work either if I echo USB1 into the wakeup table; it just sets an S4 flag. Can I get the USB ports into S3? I want to make the machine wake up from Suspend-to-RAM (S3, ACPI standard) when a key on my external keyboard is pressed. It only wakes up if a key on the internal laptop keyboard is pressed. It seems that if I plug in a USB mouse, the USB port isn't even powered. I have no BIOS option to change this. Further device-specific information:

      usb-devices:
      T: Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 13 Spd=1.5 MxCh= 0
      D: Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1
      P: Vendor=04d9 ProdID=1603 Rev=03.10
      S: Manufacturer=
      S: Product=USB Keyboard
      C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=100mA
      I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=01 Driver=usbhid
      I: If#= 1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid

      root@underwater-laptop:/# lsusb
      [...]
      Bus 001 Device 013: ID 04d9:1603 Holtek Semiconductor, Inc.
      Bus 001 Device 004: ID 0bda:0138 Realtek Semiconductor Corp.
      Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      [...]

    If this doesn't work, I have to properly explain why -- but I think it is very hard to research these kernel internals. Any hints for good information here? I hope it's possible; I'm just looking for any solution. Edit: this (waking up on USB) works on Windows! Thanks a lot, Marius
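    Besides /proc/acpi/wakeup, one hedged thing to try is the per-device wakeup flag in sysfs: USB wakeup generally needs both the controller's ACPI wakeup and the device's own power/wakeup attribute enabled, and the BIOS/EC must still keep the port powered in S3. A minimal sketch (run as root):

      #!/usr/bin/env python3
      # Enable the wakeup flag on every USB device that exposes one in
      # sysfs. This is one piece of Wake-on-USB, not a guaranteed fix:
      # the controller's ACPI wakeup and port power in S3 still apply.
      import glob

      for path in glob.glob("/sys/bus/usb/devices/*/power/wakeup"):
          with open(path) as f:
              state = f.read().strip()
          if state != "enabled":
              with open(path, "w") as f:
                  f.write("enabled")
              print(f"enabled wakeup: {path}")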

    Read the article

  • pros-cons of separate hosting accounts versus using addon domain

    - by hen3ry
    Folks: For historical reasons, I have "Site A" on "Hosting Account A", and "Site B" on "Account B", totally independent accounts with the same vendor, Bluehost. Both are primary domains. Now that Hosting Account B is just about to expire, I'm considering letting it disappear and moving Site B to an Addon domain on "Account A". Both sites are non-commercial, narrow-interest, very-low-traffic, hundreds of page views per month. The file weights for the sites are non-trivial, especially as I like to install specialized CMSs in subdomains. Since Bluehost allows unlimited hosting space there should be no issue with the file load, except I've seen hints of an issue with total file count, maybe 50k files -- which I'm not currently close to hitting, but might eventually. My question: what are the pros and cons of using separate accounts versus hosting Site B as an addon domain? Obviously, using a single account is cheaper by half, and I know that my authoring environment (DreamWeaver CS5) complains when it detects nested source trees, telling me "Synchronization" might fail in such cases, but I don't depend on this feature. What other factors should I consider? TIA

    Read the article

  • Business Intelligence (BI) Defined

    CIO.com defines Business Intelligence (BI) as a generic reference to a collection of applications that are used to analyze raw organizational data. Typical BI activities include data mining, online analytical processing, querying and reporting. They further explain that the primary reason why a company would utilize BI is to make its data more accessible. The more accessible data is to the users, the faster they can identify ways to reduce business costs, discover new business opportunities, and react quickly to adjust prices based on current supply and demand. One area in which a hospital system could use BI derived from a data warehouse can be seen in the Emergency Room (ER), in regard to the number of doctors and nurses working during a full moon at each ER location. To determine this, BI needs to identify a trend in the number of patients seen on a full moon; furthermore, it also needs to determine the optimal number of staff members working during a full moon, by determining the employee-to-patient ratio needed to meet standard patient wait times while also being the most cost-effective for the hospital. This will allow the hospital system to estimate the number of potential patients they will have on the next full moon and adjust staff schedules accordingly, ensuring that patient care is not affected in any way due to the influx (or lack of influx) of patients during this time, while also working only the minimum number of employees needed to still make a profit. Another area where a hospital system could use BI concerns the orders placed with drug and medical supply companies. BI could identify trends in prescriptions given to patients; this information could be used for ordering new supplies and forecasting the amount of medicine each hospital needs to keep on site at a given time. For example, a hospital might want to stock up on materials needed to set bones in a cast prior to the summer, because its BI indicates that a majority of broken bones occur during the summer due to children being out of school and having more free time.
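    The full-moon staffing example boils down to a forecast plus a ratio; a toy sketch of that arithmetic (all numbers invented for illustration):

      #!/usr/bin/env python3
      # Toy version of the ER staffing example: estimate patient volume on
      # the next full moon from history, then size the staff from a target
      # employee-to-patient ratio. All numbers are made up for illustration.
      full_moon_patient_counts = [118, 131, 124, 140, 127]  # past full moons
      target_patients_per_nurse = 4.0

      expected = sum(full_moon_patient_counts) / len(full_moon_patient_counts)
      nurses_needed = -(-expected // target_patients_per_nurse)  # ceiling division
      print(f"Expect ~{expected:.0f} patients; schedule {nurses_needed:.0f} nurses")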

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, letting you codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life-cycle management, code comparison, repetitive code generation and change of your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • Empty interface to combine multiple interfaces

    - by user1109519
    Suppose you have two interfaces:

      interface Readable { public void read(); }
      interface Writable { public void write(); }

    In some cases the implementing objects can only support one of these, but in a lot of cases the implementations will support both interfaces. The people who use the interfaces will have to do something like:

      // can't write to it without explicit casting
      Readable myObject = new MyObject();
      // can't read from it without explicit casting
      Writable myObject = new MyObject();
      // tight coupling to actual implementation
      MyObject myObject = new MyObject();

    None of these options is terribly convenient, even more so when considering that you want this as a method parameter. One solution would be to declare a wrapping interface:

      interface TheWholeShabam extends Readable, Writable {}

    But this has one specific problem: all implementations that support both Readable and Writable have to implement TheWholeShabam if they want to be compatible with people using the interface, even though it offers nothing apart from the guaranteed presence of both interfaces. Is there a clean solution to this problem, or should I go for the wrapper interface?
    UPDATE: It is in fact often necessary to have an object that is both readable and writable, so simply separating the concerns in the arguments is not always a clean solution.
    UPDATE2: (extracted as an answer so it's easier to comment on)
    UPDATE3: Please be aware that the primary use case for this is not streams (although they too must be supported). Streams make a very specific distinction between input and output, and there is a clear separation of responsibilities. Rather, think of something like a byte buffer where you need one object you can write to and read from, one object that has a very specific state attached to it. These objects exist because they are very useful for some things like asynchronous I/O, encodings, ...

    Read the article

  • methods DSA_do_verify and SHA1 (OpenSSL library for Windows)

    - by Rei
    I am working on a program to authenticate an ENC signature file using OpenSSL for Windows, specifically the DSA_do_verify(...) method and the SHA1(...) hash algorithm, but I am having problems: the result from DSA_do_verify is always 0 (invalid). I am using the signature file of test set 4B from the IHO S-63 Data Protection Scheme, and also the SA public key (downloadable from IHO) for verification. Below is my program; can anyone help me see where I have gone wrong? I have tried many ways but failed to get the verification to be valid. Thanks.

    The signature file from test set 4B:

      // Signature part R:
      3F14 52CD AEC5 05B6 241A 02C7 614A D149 E7D6 C408.
      // Signature part S:
      44BB A3DB 8C46 8D11 B6DB 23BE 1A79 55E6 B083 7429.
      // Signature part R:
      93F5 EF86 1FF6 BA6F 1C2B B9BB 7F36 0C80 2F9B 2414.
      // Signature part S:
      4877 8130 12B4 50D8 3688 B52C 7A84 8E26 D442 8B6E.
      // BIG p
      C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
      // BIG q
      B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
      // BIG g
      4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
      // BIG y
      15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.

    dataServer_pkeyfile.txt (extracted from the above):

      // BIG p
      C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
      // BIG q
      B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
      // BIG g
      4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
      // BIG y
      15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.
    Program abstract:

      QByteArray pk_data;
      QFile pk_file("./dataServer_pkeyfile.txt");
      if (pk_file.open(QIODevice::Text | QIODevice::ReadOnly)) {
          pk_data.append(pk_file.readAll());
      }
      pk_file.close();

      unsigned char ptr_sha_hashed[20];
      unsigned char *ptr_pk_data = (unsigned char *)pk_data.data();
      // OpenSSL SHA1 hashing algorithm
      SHA1(ptr_pk_data, pk_data.length(), ptr_sha_hashed);

      DSA_SIG *dsasig = DSA_SIG_new();
      char ptr_r[] = "93F5EF861FF6BA6F1C2BB9BB7F360C802F9B2414"; // from test set 4B
      char ptr_s[] = "4877813012B450D83688B52C7A848E26D4428B6E"; // from test set 4B
      if (BN_hex2bn(&dsasig->r, ptr_r) == 0) return 0;
      if (BN_hex2bn(&dsasig->s, ptr_s) == 0) return 0;

      DSA *dsakeys = DSA_new();
      // the following values are from the SA public key
      char ptr_p[] = "FCA682CE8E12CABA26EFCCF7110E526DB078B05EDECBCD1EB4A208F3AE1617AE01F35B91A47E6DF63413C5E12ED0899BCD132ACD50D99151BDC43EE737592E17";
      char ptr_q[] = "962EDDCC369CBA8EBB260EE6B6A126D9346E38C5";
      char ptr_g[] = "678471B27A9CF44EE91A49C5147DB1A9AAF244F05A434D6486931D2D14271B9E35030B71FD73DA179069B32E2935630E1C2062354D0DA20A6C416E50BE794CA4";
      char ptr_y[] = "963F14E32BA5372928F24F15B0730C49D31B28E5C7641002564DB95995B15CF8800ED54E354867B82BB9597B158269E079F0C4F4926B17761CC89EB77C9B7EF8";
      if (BN_hex2bn(&dsakeys->p, ptr_p) == 0) return 0;
      if (BN_hex2bn(&dsakeys->q, ptr_q) == 0) return 0;
      if (BN_hex2bn(&dsakeys->g, ptr_g) == 0) return 0;
      if (BN_hex2bn(&dsakeys->pub_key, ptr_y) == 0) return 0;

      int result; // valid = 1, invalid = 0, error = -1
      result = DSA_do_verify(ptr_sha_hashed, 20, dsasig, dsakeys); // result is 0 (invalid)
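    As a cross-check of the C result, the same verification can be expressed with the pyca/cryptography package. A sketch below, assuming dataServer_pkeyfile.txt contains exactly the bytes that were hashed; note that QIODevice::Text translates line endings, which would change the SHA-1 digest and make DSA_do_verify return 0, so that flag is worth checking, and modern OpenSSL-backed builds may reject a 512-bit DSA key with SHA-1 at default security levels:

      #!/usr/bin/env python3
      # Cross-check of the verification with the pyca/cryptography package.
      # p, q, g, y are the SA public key values and r, s the signature
      # values quoted in the question.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import dsa, utils
      from cryptography.exceptions import InvalidSignature

      p = int("FCA682CE8E12CABA26EFCCF7110E526DB078B05EDECBCD1EB4A208F3AE1617AE"
              "01F35B91A47E6DF63413C5E12ED0899BCD132ACD50D99151BDC43EE737592E17", 16)
      q = int("962EDDCC369CBA8EBB260EE6B6A126D9346E38C5", 16)
      g = int("678471B27A9CF44EE91A49C5147DB1A9AAF244F05A434D6486931D2D14271B9E"
              "35030B71FD73DA179069B32E2935630E1C2062354D0DA20A6C416E50BE794CA4", 16)
      y = int("963F14E32BA5372928F24F15B0730C49D31B28E5C7641002564DB95995B15CF8"
              "800ED54E354867B82BB9597B158269E079F0C4F4926B17761CC89EB77C9B7EF8", 16)
      r = int("93F5EF861FF6BA6F1C2BB9BB7F360C802F9B2414", 16)
      s = int("4877813012B450D83688B52C7A848E26D4428B6E", 16)

      public_key = dsa.DSAPublicNumbers(y, dsa.DSAParameterNumbers(p, q, g)).public_key()
      signature = utils.encode_dss_signature(r, s)  # (r, s) -> DER, like a DSA_SIG

      data = open("dataServer_pkeyfile.txt", "rb").read()
      try:
          public_key.verify(signature, data, hashes.SHA1())  # hashes data itself
          print("valid")
      except InvalidSignature:
          print("invalid")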

    Read the article

  • Routing tables don't show ppp0 after 12.04 kernel upgrade to 3.5.0: Haier CE682 modem configuration

    - by ubunsteve
    I'm trying to get my Haier CE682 EVDO modem, model number 201e:1022, to work in Ubuntu 12.04, kernel 3.5.0-030500-generic #201207211835. I had it working in a previous 12.04 kernel, using compat-wireless and these instructions: http://zulkhamsyahmh.blogspot.com/2012/05/install-smartfren-haier-ce682-on-ubuntu.html. To get it working, I had to edit the routing tables so that a ppp0 entry showed up, as suggested at http://www.linuxquestions.org/questions/slackware-14/wvdial-is-connecting-but-im-unable-to-do-anything-714861/. Network Manager doesn't work with this modem, so I use either wvdial or gpppon to connect to it, both of which work (after I run the command sudo modprobe usbserial vendor=0x201e product=0x1022). This is the output when I connect to the modem with gpppon:

      Using interface ppp0
      Connect: ppp0 <--> /dev/ttyUSB0
      sent [LCP ConfReq id=0x1]
      rcvd [LCP ConfAck id=0x1]
      rcvd [LCP ConfReq id=0x2]
      sent [LCP ConfAck id=0x2]
      sent [LCP EchoReq id=0x0 magic=0x819c86db]
      rcvd [CHAP Challenge id=0x1 <1ac8f12799e953967a3cc222c9254690, name = ""]
      sent [CHAP Response id=0x1 <6f12a903dc40915ca2761c17b87f8fbd, name = "smart"]
      rcvd [LCP EchoRep id=0x0 magic=0x0]
      rcvd [CHAP Success id=0x1 ""]
      CHAP authentication succeeded
      CHAP authentication succeeded
      sent [CCP ConfReq id=0x1]
      sent [IPCP ConfReq id=0x1]
      rcvd [IPCP ConfReq id=0x1]
      sent [IPCP ConfAck id=0x1]
      rcvd [CCP ConfReq id=0x1]
      sent [CCP ConfAck id=0x1]
      rcvd [CCP ConfRej id=0x1]
      sent [CCP ConfReq id=0x2]
      rcvd [IPCP ConfRej id=0x1]
      sent [IPCP ConfReq id=0x2]
      rcvd [CCP ConfAck id=0x2]
      rcvd [IPCP ConfNak id=0x2]
      sent [IPCP ConfReq id=0x3]
      rcvd [IPCP ConfAck id=0x3]
      not replacing existing default route via 192.168.3.1
      local IP address 10.191.248.154
      remote IP address 10.17.95.25
      primary DNS address 10.17.3.244
      secondary DNS address 10.17.3.245

    As you can see, there is a problem: "not replacing existing default route via 192.168.3.1". This is the output of route:

      Kernel IP routing table
      Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
      default       192.168.3.1   0.0.0.0          UG     0       0    0    wlan0
      link-local    *             255.255.0.0      U      1000    0    0    wlan0
      192.168.3.0   *             255.255.255.0    U      2       0    0    wlan0

    I had tried these commands, which had previously worked in the earlier kernel:

      route del default
      route add default ppp0

    but they broke my wireless internet connection. I then added back the default route shown above with sudo route add default gw 192.168.3.1 wlan0. So it seems I need to add or change the routing to show a ppp0 connection, but I don't know how to do that.
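    For what it's worth, pppd only installs a default route when none exists, which is exactly what the "not replacing existing default route" line says; removing the wlan0 default once the PPP link is up (and restoring it when done) is the usual manual fix. A hedged sketch using the iproute2 CLI, with the gateway address taken from the question; run as root:

      #!/usr/bin/env python3
      # Swap the default route from wlan0 to ppp0 after the PPP link is up.
      # Gateway 192.168.3.1 is the question's wlan0 gateway; run as root.
      import subprocess

      def sh(*args):
          print("+", " ".join(args))
          subprocess.run(args, check=True)

      sh("ip", "route", "del", "default", "via", "192.168.3.1", "dev", "wlan0")
      sh("ip", "route", "add", "default", "dev", "ppp0")

    Reversing the two commands restores the Wi-Fi default when the modem disconnects.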

    Read the article
