Search Results

Search found 2714 results on 109 pages for 'extremely frustrated'.

Page 52/109 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial Trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar - very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 will not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well-written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. Perhaps such a set of converters would be a good fit for Apache DeltaSpike as a Java EE 7 extension? In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility of course is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
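
    For illustration, here is a minimal sketch of the kind of JPA 2.1 Type Converter the entry describes, mapping java.time.LocalDate to java.sql.Date (the class name and the autoApply choice are illustrative, not taken from the linked post):

        import java.sql.Date;
        import java.time.LocalDate;
        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // Converts between the Java SE 8 LocalDate used in entity classes and the
        // java.sql.Date that JPA 2.1 providers already know how to persist.
        // autoApply = true applies the converter to every LocalDate attribute.
        @Converter(autoApply = true)
        public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

            @Override
            public Date convertToDatabaseColumn(LocalDate attribute) {
                return attribute == null ? null : Date.valueOf(attribute);
            }

            @Override
            public LocalDate convertToEntityAttribute(Date dbData) {
                return dbData == null ? null : dbData.toLocalDate();
            }
        }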

    Read the article

  • Exalogic Elastic Cloud Software (EECS) version 2.0.1 available

    - by JuergenKress
    We are pleased to announce that as of today (May 14, 2012) the Exalogic Elastic Cloud Software (EECS) version 2.0.1 has been made Generally Available. This release is the culmination of over two and a half years of engineering effort from an extended team spanning 18 product development organizations on three continents, and is the most powerful, sophisticated and comprehensive Exalogic Elastic Cloud Software release to date. With this new EECS release, Exalogic customers now have an ideal platform not only for high-performance and mission-critical applications, but for standardization and consolidation of virtually all Oracle Fusion Middleware, Fusion Applications, Application Unlimited and Oracle GBU Applications. With the release of EECS 2.0.1, Exalogic is now capable of hosting multiple concurrent tenants, business applications and middleware deployments with fine-grained resource management, enterprise-grade security, unmatched manageability and extreme performance in a fully virtualized environment. The Exalogic Elastic Cloud Software 2.0.1 release brings important new technologies to the Exalogic platform: support for extremely high-performance x86 server virtualization via a highly optimized version of Oracle VM 3.x, and a rich, fully integrated Infrastructure-as-a-Service management system called Exalogic Control, which provides graphical, command line and Java interfaces that allow Cloud Users, or external systems, to create and manage users, virtual servers, virtual storage and virtual network resources. Webcast series: Rethink Your Business Application Deployment Strategy - 'Redefining the CRM and E-Commerce Experience with Oracle Exalogic' (7-Jun @ 10am PT) and, on demand, 'The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure' (featuring Gartner analysts). For regular information, become a member of the WebLogic Partner Community - please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Is Oracle Database Appliance (ODA) A Best Kept Secret?

    - by Ravi.Sharma
    There is something about Oracle Database Appliance that underscores the tremendous value customers see in the product: repeat purchases. When you buy “one” of something and come back to buy another, it confirms that the product met your expectations, you found good value in it, and perhaps you will continue to use it. But when you buy “one” and come back to buy many more on your very next purchase, it tells you something else. It tells you that you truly believe you have found the best value out there. That you are convinced! That you are sold on the great idea and have discovered a product that far exceeds your expectations and delivers tremendous value! Many Oracle Database Appliance customers are such large-volume repeat buyers. It is no surprise that the product has deeper penetration in many accounts where a customer made an initial purchase. The value proposition of Oracle Database Appliance is undeniably strong and extremely compelling. This is especially true for customers who are simply upgrading or “refreshing” their hardware (and reusing software licenses). For them, the ability to acquire world-class, highly available database hardware along with leading-edge management software and all of the automation is absolutely a steal. One customer DBA recently said, “Oracle Database Appliance is the best investment our company has ever made”. Such extreme statements do not come out of thin air; you have to experience it to believe it. Oracle Database Appliance is a low-cost product, so not many sales managers may be knocking on your door to sell it. But the great value it delivers to small and mid-size businesses and database implementations should not be underestimated.

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly highly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article

  • Changing the Default Windows Phone 7 Deployment Target In Visual Studio 2010

    - by mbcrump
    After you download and install the January 2011 Windows Phone update, you will notice one annoying thing: the default deployment target for Windows Phone projects in Visual Studio changes to Windows Phone 7 Device. Before the update, it defaulted to the Emulator. I found this extremely annoying, as I'm more than likely going to test with the emulator before putting an app on my actual device. Now, to be fair, Microsoft told you they were going to switch the default and even provided a solution, but you would have to catch a tiny paragraph in the release notes. The good news is that it's very easy to undo: simply navigate to %LocalAppData%\Microsoft\Phone Tools\CoreCon. See the folder named "10.0"? Go ahead and delete it. The folder will then be completely empty, and if you fire up Visual Studio 2010 you will see we are now defaulting to the Emulator again. In my opinion, this should have been left at Emulator. Now, new WP7 developers will get a build error when they first start a WP7 project and will not know why until they read the error list.

    Read the article

  • New SPC2 benchmark - The 7420 KILLS it!!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2. The only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and, as a cluster, has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics. 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • WiFi slow sometimes, reboot helps, how do I debug it?

    - by January
    Ubuntu 12.04.1 with all updates installed. Laptop: Lenovo ThinkPad X230 with an Intel Corporation Centrino Advanced-N 6205. WiFi sometimes becomes extremely slow. Often this occurs when I wake the system from suspend and connect to a different network. I find no obvious clues in the system logs. /etc/init.d/network-manager restart doesn't help, but a reboot does. How can I go about debugging this issue? Specifically, which parts of the system should I try to restart (without a complete reboot)? I know of problems with Intel WiFi (see for example this question and the instructions here), but if that were the problem, I would expect the WiFi to be slow at all times, not just sometimes. Also, I have a gut feeling that it might be a DNS issue (for example, getting a page from a known server is faster than accessing a new server), but I don't know how to tackle it. Update: despite numerous updates in the meanwhile, I still observe this behavior. It always happens when I access my WiFi router at home after returning from work; when I reboot my laptop, the connection speed is good again.

    Read the article

  • Internet unusably slow with Realtek Semiconductor Co., Ltd. RTL8111/8168B card

    - by user42424
    So I have recently installed Ubuntu 11.10 for a dual boot with Windows 7. After the install I had around 300 updates, so I installed them. At first I could use the internet, although it was extremely slow. However, now I cannot: sometimes a page will load and other times it will simply time out. When I try to download something, it will either take forever or not complete at all. This is a wired system. On the Windows side my speeds are fine. Any help would be greatly appreciated. Also, like I said, I am new to Linux/Ubuntu, so please be nice. One last thing: I also installed 11.10 in the same dual-boot configuration on my laptop, and the wireless speed there is the same as on Windows. Only the wired desktop gives me the problem. Here is some hardware info; hope it helps. Mobo: Gigabyte GA-880GMA / CPU: AMD Phenom(tm) II X4 965 / 16 GB RAM / Realtek PCIe GBE Family Controller / Cisco Linksys E2000 / Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        eth0  Link encap:Ethernet  HWaddr 50:e5:49:33:64:cf
              inet addr:192.168.1.118  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::52e5:49ff:fe33:64cf/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:76722 errors:0 dropped:76722 overruns:0 frame:76722
              TX packets:49692 errors:0 dropped:65 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:107956638 (107.9 MB)  TX bytes:4342553 (4.3 MB)
              Interrupt:44 Base address:0x2000

    Update: thanks to roadmr, problem solved! I powered down the PC, unplugged the power cable from the PC end, waited a few (maybe 3) minutes, plugged the power back in, then pushed and held the power button for 30+ seconds. I let go, powered on the PC, and my internet is fine! Downloads and web speed blaze along, just like on my Windows 7 boot, maybe even faster. Problem solved; thanks to all!

    Read the article

  • Models, collections...and then what? Processes?

    - by Dan
    I'm a LAMP-stack dev who's been more on the JavaScript side the last few years and really enjoying the Model + Collection approach to data entities that BackboneJS, etc. uses. It's helped me organize my code in such a way that it is extremely portable, keeping all my properties and methods in the scope (model, collection, etc.) in which they apply. One thing that keeps bugging me though is how to organize the next level up - the 'process layer', as you might call it - that can potentially operate on instances of either models or collections or whatever else. Where should methods like find() (which returns a collection) and create() (which returns a model) reside? I know some people would put a create() in the Collection prototype, but while a collection operates on models I don't think it's exactly right for it to create them. And while a find() would return a collection, I don't think it's correct to have that action within the collection prototype itself (it should be a layer up). Can anyone offer some examples of any patterns that employ some kind of OOP-friendly 'process' layer? I'm sorry if this is a fairly well-known discussion, but I'm afraid I can't seem to find the terminology to search for.
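
    One common shape for that layer is a repository: an object that sits above both models and collections and owns lifecycle operations like create() and find(). A hypothetical sketch in Java (all names invented for illustration; the same layering applies in JavaScript):

        import java.util.ArrayList;
        import java.util.List;

        // Model: owns its per-instance properties and behavior.
        class User {
            final String name;
            User(String name) { this.name = name; }
        }

        // "Process layer": a repository that owns lifecycle operations which
        // belong to neither a single model nor the collection prototype.
        class UserRepository {
            private final List<User> store = new ArrayList<>();

            // create() returns a model; the repository, not the collection,
            // is responsible for bringing instances into existence.
            User create(String name) {
                User user = new User(name);
                store.add(user);
                return user;
            }

            // find() returns a collection, without itself living inside one.
            List<User> find(String name) {
                List<User> matches = new ArrayList<>();
                for (User u : store) {
                    if (u.name.equals(name)) matches.add(u);
                }
                return matches;
            }

            public static void main(String[] args) {
                UserRepository repo = new UserRepository();
                repo.create("ada");
                System.out.println(repo.find("ada").size()); // prints 1
            }
        }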

    Read the article

  • Advice on designing a web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms created by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are: workflow management of forms, schedule management of forms, security/role-based management, a reporting engine, and mobile/tablet support. Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated, and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task to re-architect this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on re-building this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions: What design considerations will make the system more "future-proof"? What experiences have you had in designing such systems - both failures and successes? What questions should be asked of the client/PM to make the system more "future-proof"?

    Read the article

  • Employee Engagement: Drive Business Value

    - by Kellsey Ruppel
    As we’ve been discussing this week, employee engagement is extremely important, and you’ve probably realized that effectively engaging your employees is essential to driving business value. Your employees are the ones responsible for executing on the business’ objectives. Your employees (in the sales and service departments) are the ones interacting with your customers the most, so delivering on customer expectations and attaining high levels of customer engagement are simply not possible without successfully empowering this stakeholder group. High employee and partner engagement can have many benefits, including: higher levels of employee productivity; longer employee retention; stronger, more enduring and more successful relationships; and employees who serve as ambassadors for the organization’s brand, are more likely to deliver excellent customer service, refer others for hire, recommend the organization’s products and services, and share feedback with their colleagues. In a way, engagement is a measure of employee investment in an organization’s mission and brand. And then you have the enablement piece of this as well. It’s hard to imagine a high level of engagement existing among employees who don’t feel that they’ve been enabled to do their jobs very efficiently or effectively. You’re just not going to find high engagement among people if the everyday processes and technologies they work with make it a challenge for them to access, share and manage the information they need to do their jobs, or if they’re unable to effectively collaborate on the projects they’re working on. How does your organization measure on the employee engagement spectrum? We’ve got a number of different resources to help you get started: the Portal Resource Center, the "Got a minute?" video, the WebCenter in Action webcast series, and the Portal Engagement webcast.

    Read the article

  • What is the value of workflow tools?

    - by user16549
    I'm new to workflow development, and I don't think I'm really getting the "big picture" - or perhaps to put it differently, these tools don't currently "click" in my head. So it seems that companies like to create business drawings to describe processes, and at some point someone decided that they could use a state-machine-like program to actually control processes from a lines-and-boxes diagram. Ten years later, these tools are huge and extremely complicated (my company is currently playing around with WebSphere, and I've attended some of the training - it's a monster; even the so-called "minimalist" versions of these workflow tools, like Activiti, are huge and complicated, although not nearly as complicated as the beast that is WebSphere, afaict). What is the great benefit in doing it this way? I can kind of understand the simple lines-and-boxes diagrams being useful, but these things, as far as I can tell, are visual programming languages at this point, complete with conditionals and loops. Programmers here appear to be doing a significant amount of work in the lines-and-boxes layer, which to me just looks like a really crappy, really basic visual programming language. If you're going to go that far, why not just use some sort of scripting language? Have people thrown the baby out with the bathwater on this? Has the lines-and-boxes thing been taken to an absurd level, or am I just not understanding the value in all this? I'd really like to see arguments in defense of this by people who have worked with this technology and understand why it's useful. I don't see the value in it, but I recognize that I'm new to this as well and may not quite get it yet.

    Read the article

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow, a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB). The Inventory update results in a record being placed in the SFM Event Queue. If the record is successfully processed, the IB tables are updated; if there is an error, the record is placed in the csi_txn_errors table and the error needs to be resolved so that the IB instance can be created. It's extremely important to be proactive and monitor IB transaction errors regularly. Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually; the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves. Install Base Transaction Error Diagnostic Program: In the past the IBtxnerr.sql script was used to diagnose transaction errors; this is now replaced by an enhanced concurrent program version of the script. See the following note for details of how to download, install and run the concurrent program, as well as details of how to interpret the results: Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program. The program provides comprehensive information about the errors found, as well as links to known knowledge articles which can help to resolve the specific error. Troubleshooting: Watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab). The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources, as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information: Doc ID 1275326.1 - E-Business Oracle Install Base Product Information Center; Doc ID 1289858.1 - Install Base Transaction Errors Master Repository; Doc ID 577978.1 - Troubleshooting Install Base Errors in the Transaction Errors Processing Form. Don't forget your Install Base Community, where you can ask questions to help you resolve your IB transaction errors.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites: a "small" suite, taking only a couple of hours to run; a "medium" suite that takes multiple hours, usually run every night (nightly); and a "large" suite that takes a week+ to run. We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then, the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of nightly frequency, is done for the large suite. Unfortunately, the medium suite does fail pretty frequently. That means that the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is: is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape? E.g. "commit into a special precommit branch which will then periodically update the trunk every time the nightly passes". And does it matter if it's a centralized source control system like SVN or a distributed one like git? By the way, I am a junior developer with a limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • Opportunities in Cloud Computing

    - by Paul Sorensen
    A recent article from CIO Journal indicates that there is an extreme labor shortage (in certain technology areas) that is leading to upward pressure on wages for IT workers. This represents a great opportunity for those with certain skill sets, among which is Java (Oracle certification is mentioned specifically). The article points out that a key driver of the labor shortage is the expansion of cloud computing. Cloud computing is set up to make life extremely simple for end users, but the model pushes the complexity to back-end systems which are sophisticated, enterprise-level computing stacks (Oracle has an extensive set of cloud computing solutions). These complex systems require very highly skilled IT professionals (the best of the best) to successfully develop, implement, administer and maintain them. What this means for you is that there is opportunity for those who have the appropriate skills at the appropriate levels. If you want to be a part of this opportunity, you should do a self-assessment of your own skill sets and experience. Based upon your results you can decide where it would be most appropriate to spend your time and resources for the highest return on your investment. By expanding and sharpening your skills and by gaining greater experience, you will be better prepared to take advantage of career opportunities (like this) that come along periodically. As you evaluate your needs, remember that Oracle University has a tremendous selection of high-quality education offerings (including training and certification) that can help you move your career forward. Thanks and best of luck!

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has metadata that dictates the four 256x256 textures that are used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures that are being used to splat the cell. We combine that geometry with any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order is kept the same as other like cells and blending happens in the right order too. But even with this batching approach, it isn't uncommon when looking out across an area of open terrain to have a batch count between 1200 and 1700, depending upon how frequently textures differ or how different the texture blends are between cells. We are only doing frustum culling presently. So, using texture splatting, are there other alternatives that can reduce the batch count and allow rendering to be extremely performance-friendly, even under DirectX 9c? We considered using texture atlases, since we're targeting DirectX 9c and older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate, with the exception of disabling mipmapping. Disabling mipmapping results in poor-quality textures from a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?
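
    For reference, a sketch of the kind of bookkeeping the batching approach above implies - grouping visible cells by their (order-independent) set of splat textures, so each distinct set becomes one batch. The types and names here are hypothetical, not from any particular engine:

        import java.util.*;

        class Cell {
            final int[] textureIds; // the four splat textures used by this 32x32 cell

            Cell(int... textureIds) { this.textureIds = textureIds; }

            // Order-independent key: cells sharing the same four textures can be
            // merged into one batch even if their splat order differs (the blend
            // maps are adjusted so blending still happens in a canonical order).
            Set<Integer> textureKey() {
                Set<Integer> key = new TreeSet<>();
                for (int id : textureIds) key.add(id);
                return key;
            }
        }

        class TerrainBatcher {
            // One draw batch per distinct texture set among the visible cells;
            // the resulting batch count is batches.size().
            static Map<Set<Integer>, List<Cell>> batch(List<Cell> visibleCells) {
                Map<Set<Integer>, List<Cell>> batches = new HashMap<>();
                for (Cell cell : visibleCells) {
                    batches.computeIfAbsent(cell.textureKey(), k -> new ArrayList<>())
                           .add(cell);
                }
                return batches;
            }
        }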

    Read the article

  • Successful Fusion CRM Bootcamp in Paris - July 24-26th

    - by Richard Lefebvre
    The first Fusion CRM Bootcamp for EMEA partners successfully took place at the Paris Pullman Bercy hotel on July 24-26th. The agenda covered 14 Fusion CRM topics in depth, including detailed presentations and hands-on exercises, delivered by a team of Fusion CRM experts from Oracle Product Development. 89 participants representing 55 companies from 14 different countries attended this event, which was also a great opportunity to network with Oracle Product Development and Alliances & Channels executives during the breaks and the "Fusion Lounge" session each day after the training. As expressed by the participants in the event survey, overall satisfaction reached an impressive 85%+ responding "met or exceeded the expectations", with individual comments such as: "On top of the presentation of Fusion CRM as a product, this event allowed us to better understand Oracle's product and rollout strategy." "The ability to meet the development team was really a bonus." "Extremely valuable information that enables integrators to get on the road to Fusion CRM." "Excellent organization, good product information coverage and demonstration." Additional Fusion CRM bootcamps are planned across EMEA in the next quarters, although they will probably be in a different format, which is still to be defined.

    Read the article

  • Any examples of fair MMO games with quick completion

    - by Keith Player
    I'm looking for some example games for inspiration that allow anywhere from 10 to a large number of players at a time and can be completed in 10 to 30 minutes. I'm looking for something that would have extremely low bandwidth and not be dependent on chance or luck (i.e. one player can't gain an unfair advantage because the computer put them in a better position). I realized on the way home that more clarification might be helpful: I'm looking to develop a pay-to-play competition that would allow a large number of players to compete in a relatively short period of time. One way would be to have an MMO that can be completed in 30 minutes; another way would be 10-person games that finish in under 5 minutes, with the winners then competing against each other until a winner is decided. I'm interested in any genre that would make for a fun/interesting game that doesn't depend on luck, so all players should have the same choice/availability of activities/resources and follow the same rules. Some games that could possibly be modified into what I want would be bztanks (too easy to create a bot), Diplomacy (takes too long), Risk, or some chess-like game. I was just wondering if there are other game types beyond the ones I have been considering.

    Read the article

  • Cannot establish ssh connection to computer on local network

    - by ovangle
    I've just (re)installed Ubuntu 11.10 on my main PC, and the connection times out every time I try to ssh to my laptop (over the local network) to retrieve the files I backed up there. I can establish a connection in the other direction without issue. Here's the verbose output I get when I try to connect:

        ovangle@ruby-EP43-DS3:~$ ssh -v [email protected]
        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 10.1.1.4 [10.1.1.4] port 22.
        debug1: connect to address 10.1.1.4 port 22: Connection timed out
        ssh: connect to host 10.1.1.4 port 22: Connection timed out

    ssh is installed on both machines, and I've tried deleting '~/.ssh/known_hosts' on both machines; still nada. I've changed the sshd logging on the laptop to VERBOSE and restarted the daemon (because I wasn't getting any relevant syslog entries otherwise), and this is the log for the most recent connection attempt. EDIT: posted wrong logs last time. They just showed that there was a connection received; they weren't actually the sshd logs (which were in auth.log, as I recently discovered). Unfortunately, that log is filling up with extremely weird error messages and gives me no information about the connection.

        Nov 8 16:02:18 ovangle-A6Rp pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000)
        Nov 8 16:02:18 ovangle-A6Rp pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device
        Nov 8 16:02:18 ovangle-A6Rp pkexec[6270]: ovangle: Executing command [USER=root] [TTY=unknown] [CWD=/home/ovangle] [COMMAND=/usr/sbin/gnome-power-backlight-helper --set-brightness 2]
        Nov 8 16:02:19 ovangle-A6Rp pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000)
        Nov 8 16:02:19 ovangle-A6Rp pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device
        Nov 8 16:02:19 ovangle-A6Rp pkexec[6273]: ovangle: Executing command [USER=root] [TTY=unknown] [CWD=/home/ovangle] [COMMAND=/usr/sbin/gnome-power-backlight-helper --set-brightness 7]

    Read the article

  • CMS and Databases vs. DIY

    - by hozza
    I have been programming for many years now, primarily in PHP and the like, and would consider myself an intermediate programmer. Some of my online projects have now gone global and are very widely used, and I am now thinking deeply about scalability etc. All of my systems so far are written in PHP with no standard database such as MySQL; instead, our databases use an 'operating system style' method of storing information: files and folders, if you will. We also do not use any outside/third-party software or CMS, and so far this has worked out extremely well. Most people, when they hear about the way we do things, criticize it and say it is an idiotic idea, but normally, after seeing our systems in more depth, they are converted to our way of doing things. Is it really that bad to not use a standard database system, using only the one (slightly heavier than others) language of PHP? How well, on the face of it, will this kind of setup scale? N.B. Our systems include things such as account and user management, documentation development, and task/project managing.
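
    As a toy illustration of the 'files and folders as a database' idea the question describes (a hypothetical Java sketch, since the poster's PHP isn't shown; all names invented; assumes Java 11+ for Files.writeString/readString):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Each record is a directory; each field is a file inside it.
        class FileStore {
            private final Path root;

            FileStore(Path root) { this.root = root; }

            void put(String record, String field, String value) throws IOException {
                Path dir = root.resolve(record);
                Files.createDirectories(dir);
                Files.writeString(dir.resolve(field), value);
            }

            String get(String record, String field) throws IOException {
                return Files.readString(root.resolve(record).resolve(field));
            }

            public static void main(String[] args) throws IOException {
                FileStore users = new FileStore(Path.of("users"));
                users.put("alice", "email", "alice@example.com");
                System.out.println(users.get("alice", "email")); // alice@example.com
            }
        }

    With this layout, directory fan-out and the lack of atomic multi-field updates tend to be the scaling pain points the question is asking about.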

    Read the article

  • Do I expect too much work from an employer? [closed]

    - by Ant
    I recently switched jobs because I was not challenged enough, the work would come in waves, and I HATED the people I worked with. I am a recent college grad (May 2009), and based on the 3 internships and 2 full-time jobs I have had, I am finding that employers cannot keep me satisfied with the amount of work. At my new job, I like the people I work with and I am challenged, but I still do not get enough work. I hate downtime. I always want to have something to work on for AT LEAST 6 out of the 8 hours. I was surprised that my new employer actually hired me, because the majority of the technologies they implement I had minimal exposure to; I never programmed in the technologies they use outside of one class in college. My greatest strength is that I am an extremely fast learner, and I can pick up new technologies with relative ease. They gave me a project to work on by myself, and I think they assumed it would take me longer to complete. Now that I have finished that app, they are struggling to find something for me to do. I am not sure if it is bad timing (being close to the holidays), my manager dealing with personal issues at home, how quickly I finished the first project, or whether I expect too much out of an employer. If so, what are good things to do with all this downtime?! EDIT: Thanks for all the feedback! EDIT 2: I am going to "unaccept" the answer in an effort to keep the question open. As a few people have mentioned, this is a great discussion on how to grow as a new worker in the programming field. EDIT 3: I am attempting to revive this question so the moderators will see the support to re-open it.

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the unity support test below as well.

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated, as I'm somewhat of a noob. Thanks! Edit 1: Here is the output of lshw -C display:

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

  • Which iPhone ad API has produced the highest revenue for you?

    - by Kyle Humfeld
    This isn't a technical question, but more of a request for advice and empirical/anecdotal data. I'm nearly done writing a free app for iPhone, and I'm at the stage where I'm going to put ads into the app. I've had mixed success in the past with iAd (their fill rates have been atrocious recently, and their payouts have been cut by about 75% over the past 4 months or so), and would like to know how much ad revenue you, the community, have seen from the various ad APIs you've used for your iPhone apps. This isn't a request for opinion, i.e. which is 'better', only what kinds of numbers you're seeing. I don't need absolute figures, but 'iAd pays x% higher than AdMob, and y% lower than AdSense' would be extremely helpful to me as I make my decision as to which ad API to integrate into my app. Also, have you had any experience or success with integrating multiple ad APIs into the same app? That's something I'm considering doing in my current iAd-filled apps (particularly my iPad app, which has yet to receive a single impression after nearly 60,000 requests), something like: 1) request from iAd; 2) if that fails, request from AdSense; 3) if that fails, request from AdMob; 4) if that fails, ... etc.
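
    The cascade at the end of the question is just a priority-ordered fallback chain; a minimal sketch of that pattern in Java (the AdNetwork interface is invented for illustration and is not any real ad SDK):

        import java.util.List;

        // Hypothetical abstraction over an ad network; not a real SDK API.
        interface AdNetwork {
            String name();
            boolean requestAd(); // true if the network filled the request
        }

        class AdFallbackChain {
            private final List<AdNetwork> networksByPriority;

            AdFallbackChain(List<AdNetwork> networksByPriority) {
                this.networksByPriority = networksByPriority;
            }

            // Try each network in priority order until one fills the request;
            // returns the name of the network that served, or null for no fill.
            String serve() {
                for (AdNetwork network : networksByPriority) {
                    if (network.requestAd()) {
                        return network.name();
                    }
                }
                return null;
            }
        }

    In practice the chain would be ordered by observed eCPM, which is exactly the kind of number the question is asking the community for.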

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is even useful in practice. Due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is - let's say it diplomatically - a job you give to someone you want to punish. Puppet can help in this case by easing the management and distribution of configuration. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A warning first: Puppet is a really, really vast topic, and this article is really basic; it doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet, just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately.

    Read the article

  • Change Keybindings (hardware to software)

    - by Daniel
    I ran a search for this, but the answers I saw were referring to something altogether different from what I'm asking for. So let me clarify: I'm not asking how to change key-combo shortcuts. I'm asking how you actually change what your computer thinks you did when you press a given key. An example of what I mean (and the reason I'm asking): I'm a Chrome user, and I use Windows alongside Ubuntu. I own a Lenovo ThinkPad T61p (it came with my scholarship package, and I would have shopped for a nicer computer if I could have). The T61p has two buttons above the left and right arrow keys that map to the browser's back and forward commands. This is extremely frustrating for me, as I use the arrow keys, and a single accidental keystroke will catch me going back a page, losing temporary data, and yelling at my stupid keyboard. At the same time, I'm the type of person who keeps way too many tabs open. Chrome doesn't let me reconfigure keyboard shortcuts, and the only ways it allows you to switch between tabs are ctrl+tab and ctrl+shift+tab, and ctrl+page up/down. I was using Notepad++, and it had finally found the solution to both problems: the page-back and page-forward keys functioned as tab-back and tab-forward keys. I went through quite some effort to learn how to change the keybindings in Windows. The page-back and page-forward keys are now the page up and page down keys, respectively, and if I hold ctrl, they let me switch tabs easily, and rather pleasantly. And if I hit the keys by accident: no harm, no foul. Alas, I'm in Ubuntu now, and I need to go through the process again. And while I couldn't just find the answer online, like I did for Windows, I know Ubuntu has nice, supportive communities like this one where, hopefully, somebody can tell me how to do either what I did in Windows, or directly make it so that my computer changes tabs when I hit those buttons (removing the ctrl key from the tab-changing command).

    Read the article
