Search Results

Search found 26810 results on 1073 pages for 'fixed point'.

  • Hosting a site on Amazon EC2

    - by Khalid Mushtaq
    I have recently bought an Amazon EC2 instance and now I want to host a website on it. I have googled and found some useful info, but there is still some confusion in my mind. Suppose the domain name is www.example.com. Here is what I have done so far: I configured the domain locally on the EC2 instance, and it works fine when I open the URL in the instance's own browser (I added www.example.com to the /etc/hosts file, pointed at 127.0.0.1, so it resolves locally on the instance). I allocated an Elastic IP address and associated it with the instance, then changed the A record for www.example.com to that Elastic IP. Now what should I do? When a user anywhere in the world opens my website, will they reach my instance's IP address? Have I configured the instance properly for hosting a website?
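
    A quick sanity check for the setup above (a hedged sketch; the hostname and address are placeholders) is to query the A record directly, bypassing /etc/hosts, and confirm it returns the Elastic IP:

        # should print the Elastic IP, not 127.0.0.1
        dig +short www.example.com A

        # fetch the site the way an outside user would
        curl -I http://www.example.com/

    If dig shows the Elastic IP but the instance itself still resolves to 127.0.0.1, that is just the /etc/hosts override; remove that line before testing from the instance, and make sure the EC2 security group allows inbound port 80.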

    Read the article

  • Hybrid graphics functionality won't work with my Asus UL30V anymore

    - by futuress
    The problem is that I am no longer able to boot in 'compatibility mode' to turn on only the Nvidia graphics and install its driver: no login screen appears when Ubuntu loads. In Ubuntu 11.10 I was able to activate the 'Nvidia graphics only' option this way: 1) change the BIOS to 'compatibility mode', which turns off the Intel card; 2) install the Nvidia proprietary driver using Ubuntu's driver finder (Additional Drivers) and reboot. I was not interested in using only the Intel graphics for the sake of battery life. Now I have both cards running, they drain my battery dramatically, and the main problem with this configuration is that no OpenGL is available, so I can't play any games any more. For now I have a workaround: I uninstalled the Nvidia drivers and installed Bumblebee, and the Intel card is recognized. I would prefer to run just the Nvidia card as in Ubuntu 11.10, but for now this is better than nothing. Does anybody else have the same problem?
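
    For readers hitting the same wall: Bumblebee's whole purpose is to keep the Nvidia card powered off until a program explicitly asks for it, which addresses both the battery drain and the missing OpenGL. A hedged sketch of the usual Ubuntu setup (package names may vary by release):

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia
        optirun glxgears    # renders on the Nvidia GPU; everything else stays on Intel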

    Read the article

  • Problems with wired ethernet connection Ubuntu 11.10

    - by Andrew Fielden
    After some partition shuffling, I've got a problem on my 11.10 system: the wired ethernet interface fails to come up, although the wireless interface works. I'm using NetworkManager. I thought this might be a problem with NetworkManager, so I checked the config files, which look OK, and then tried re-installing the package, but that didn't resolve the issue. At this point I'm not sure whether the problem is software configuration or a hardware fault. I've also tried the cable in other router ports, with the same result. The symptoms: System Settings - Network reports that the cable is unplugged (it isn't), the router's red light is on for this port, dmesg reports "ADDRCONF(NETDEV_UP): eth0: link is not ready", and ifconfig reports the following:

        eth0  Link encap:Ethernet  HWaddr f0:4d:a2:a2:a7:fe
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:10 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:792 (792.0 B)  TX bytes:0 (0.0 B)
              Interrupt:46 Base address:0xe000

    My /etc/network/interfaces file contains only:

        auto lo
        iface lo inet loopback

    and /etc/resolv.conf contains only "# Generated by NetworkManager".
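
    When NetworkManager claims the cable is unplugged, the driver has usually failed to negotiate a carrier, so checking the link at layer 1 is a reasonable first step. A hedged set of checks (assuming the interface is eth0):

        sudo ethtool eth0           # "Link detected: no" confirms a physical/driver problem
        sudo ip link set eth0 up    # bring the interface up manually, then re-check
        dmesg | grep -i eth0        # look for link up/down or firmware messages from the driver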

    Read the article

  • Why was my site rejected for Google Adsense?

    - by hyuun jjang
    I have a 3-year-old blog with around 16 articles/tutorials about programming problems and solutions. It has been getting a lot of views lately, so I decided to apply for a Google AdSense account. When I first applied via Blogger, Google replied with the following: Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines: - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite). - You must provide accurate personal information with your application that matches the information on your domain registration. - Your website must contain substantial, original content... As I understood it, I decided to buy a domain and point my Blogger blog at the new naked domain. Here is the newly bought domain, where all the content of my old blog now lives: http://icodeya.com/ I reapplied, hoping that this time I would make the cut, but got this reply: Further detail: Unable to review your site: While reviewing http://www.icodeya.com/, we found that your site was down or unavailable. We suggest you check whether there was a typo in the URL submitted. When your site is operational, you can resubmit your application with the correct site by following the directions below. I'm a bit disappointed. Maybe I did something wrong with the DNS configuration, but as far as I can see the site is fully functional. I've heard that Google sends robots to crawl the site, and so on. It's just sad, because I invested in a domain name and now I can't find a way to earn from it. Any tips?
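
    One thing worth ruling out (a hedged suggestion, since Google's reviewer fetched the www host): make sure both the naked domain and the www host resolve and serve the blog, because Blogger custom domains are typically mapped via a CNAME on www only:

        dig +short icodeya.com A
        dig +short www.icodeya.com                      # usually a CNAME such as ghs.google.com for Blogger
        curl -sI http://www.icodeya.com/ | head -n 1    # expect a 200, not an error or a redirect loop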

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (Pi), i = 0..N, where each Pi is linked to its direct neighbours (Pi-1 and Pi+1). The goal: perform efficient collision detection between any two non-adjacent links, (Pi,Pi+1) vs. (Pj,Pj+1). The question: all the works treating collision detection strongly recommend a broad phase implemented via a bounding volume hierarchy. For a chain of nodes Pi, it can look like this: imagine a big blue sphere containing all the links, two green spheres containing half of them each, red spheres a quarter, and so on (the picture is not accurate, but it helps frame the question). What I do not understand is: how can such a hierarchy speed up the segment-vs-segment collision tests if, for a deformable linear object such as a chain or wire, it has to be updated every frame? Put differently, what is the actual principle of the broad phase in this particular case; how can it pay off when computing the bounding spheres is itself time-consuming and, since the geometry changes, has to be redone on every frame update? I think I am missing a key point: if we look at the picture with the chain in a spiral pose, most spheres are already contained within half of the others, or intersect them; it seems odd if this is the way it is supposed to work.
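
    The key point the question circles around: the tree's topology is built once, and each frame it is only refitted bottom-up, which is O(n) and very cheap (one sphere per node), while a single sphere-sphere rejection near the root culls every link pair beneath the two subtrees. A minimal Python sketch of the refit pass (the Node layout and seg_sphere helper are illustrative assumptions, not from the question):

        import math

        class Node:
            """BVH node over the chain; a leaf holds the index i of segment (Pi, Pi+1)."""
            def __init__(self, left=None, right=None, seg=None):
                self.left, self.right, self.seg = left, right, seg
                self.bound = None                    # bounding sphere, filled in by refit()

        class Sphere:
            def __init__(self, c, r):
                self.c, self.r = c, r

        def seg_sphere(p, q):
            # Tight sphere around one segment: midpoint centre, half-length radius.
            c = tuple((a + b) / 2.0 for a, b in zip(p, q))
            return Sphere(c, math.dist(p, q) / 2.0)

        def merge(a, b):
            # Smallest sphere enclosing spheres a and b.
            d = math.dist(a.c, b.c)
            if d + b.r <= a.r: return a              # b already inside a
            if d + a.r <= b.r: return b
            r = (d + a.r + b.r) / 2.0
            t = (r - a.r) / d
            c = tuple(ac + t * (bc - ac) for ac, bc in zip(a.c, b.c))
            return Sphere(c, r)

        def refit(node, pts):
            # Bottom-up refit: the tree is NOT rebuilt each frame; only the
            # spheres are recomputed, one merge per internal node -- O(n).
            if node.seg is not None:
                node.bound = seg_sphere(pts[node.seg], pts[node.seg + 1])
            else:
                refit(node.left, pts)
                refit(node.right, pts)
                node.bound = merge(node.left.bound, node.right.bound)

    The spiral-pose observation is expected: sibling spheres of a coiled chain do overlap heavily, and the broad phase degrades toward O(n²) in that worst case. It pays off because in typical poses most high-level tests fail, so whole subtrees of pair tests are skipped.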

    Read the article

  • Strange message during Linux boot, and slow startup, caused by "udevd[336] timeout ..."

    - by Kyrol
    When Debian (wheezy, the testing version) is loading, at a certain point a strange message appears: udevd[336] timeout usb_id --export /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-2.2/1-1.2:1.0 video4linux/ [502] After this message, another message loops for 120 seconds: udevd[336] killing /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-2.2/1-1.2:1.0 video4linux/ [502] When the loop finishes, Debian starts normally and nothing seems to be broken. I have also interrupted the loop with Ctrl-C, and the system doesn't show any problem afterwards. Does anyone know a possible answer?
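
    The device path names a video4linux interface, i.e. a USB webcam, so the hang is udev's usb_id/v4l_id helper stalling while probing it. A hedged way to replay the probe after boot and see where it stalls, using the path from the message:

        udevadm info -a -n /dev/video0    # confirm which camera this is
        udevadm test /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-2.2/1-1.2:1.0

    If the test reproduces the delay, the likely suspects are the camera's firmware loading or the uvcvideo driver, and a kernel or udev update is the probable fix.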

    Read the article

  • How do you motivate peers to become better developers?

    - by Brian Rasmussen
    In my experience there seem to be two kinds of developers (if we simplify matters a great deal, of course). On the one hand we have developers who may do a perfectly acceptable job, but who do not really care about the computer-science part of their craft. They usually know few languages/technologies and are happy to let things stay that way; for whatever reason, they don't try to improve their skills unless their current position requires it. On the other hand we have the geeks, or the pragmatic programmers if you subscribe to that idea. They play around with other languages and technologies, and usually know about several topics outside the technical domain of their current job. I would like to see more developers who are enthusiastic about software development. If you share this point of view, what do you do to push your peers in that direction? Edit: a follow-up question inspired by one of the answers: as non-managers, should we really care about this? And why/why not?

    Read the article

  • How to handle Real Time Data from a database perspective?

    - by balexandre
    I have an idea in mind, but the database side still confuses me. Imagine that I want to show real-time data; using one of the latest browser technologies (WebSockets, with fallbacks for older browsers) it is very easy to show every observer (user browser) what everyone else is doing. Remy Sharp has an example of how simple this is. But I still don't get the database part. Imagine (using Remy's Tron game) that I want to save the path of each connected user in a database, and that a client watching with a 5-second delay should see not just the state as of 5 seconds ago but its continuation in time... how can I query a DB like that? SELECT x, y FROM run WHERE time >= DATEADD(second, -5, rundate); is not the recommended path, right? And polling this every x seconds is not a real data feed, correct? If someone can help me understand the database point of view, I would greatly appreciate it.
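
    On the query side, a sliding time window re-reads rows and can miss or duplicate points between polls; the usual pattern is a cursor over an append-only log: each viewer remembers the last row id it has been sent and asks only for newer ones. A hedged T-SQL sketch (table and column names are invented):

        -- every movement sample is appended once, with a monotonically increasing id
        CREATE TABLE run (
            id      BIGINT IDENTITY PRIMARY KEY,
            player  INT      NOT NULL,
            x       INT      NOT NULL,
            y       INT      NOT NULL,
            rundate DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- a viewer trailing live play by 5 seconds: deliver everything newer than
        -- what this client already has, but nothing from the last 5 seconds
        DECLARE @last_seen_id BIGINT = 0;   -- advanced by the server per client
        SELECT id, player, x, y
        FROM run
        WHERE id > @last_seen_id
          AND rundate <= DATEADD(second, -5, GETDATE())
        ORDER BY id;

    Each batch the server pushes over the web socket advances @last_seen_id, so the feed is continuous rather than a repeated snapshot.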

    Read the article

  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda, I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited: without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries, and none of them are in Africa. I attended PDC in 2008 when Microsoft unveiled Windows Azure. One of the case studies used to show the benefits of cloud computing was a project for an education service in Ethiopia; the point they were making was that the cloud is perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure went live, my beta account stopped working, and I can't pay because I am from Uganda. Microsoft, this sucks. I don't know Microsoft's reasons for this, but I am sure we can work something out. We in Africa need the cloud more than anybody else in the world: setting up data centers that are highly scalable and available is not an option our startups have. But we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company your size, but excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you would have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.

    Read the article

  • LAN access via USB from iPod Touch?

    - by Alec
    I need to browse the local web server from my iPod Touch to test apps we're developing. I'm not allowed to install a separate wireless access point, which would be the easiest solution. Can I use the USB cable for this? Also, the local PC is a Dell Mini 9 running Ubuntu. Has anyone managed to use the wireless port to create an ad-hoc connection to an iPod Touch, so the iPod can browse the Ubuntu web server? That would be an alternative option for me. Thank you!

    Read the article

  • Servicing a WIM image with the recently released SP1 for Windows 7/Server 2008 R2

    - by noonand
    Hello, I normally hang out over on SO but wanted to ask this question in a more appropriate forum. For the sheer fun of learning how (and because I'm sick of Ghost and dd images), I have set up WDS and captured a reference WIM file. One of the things I remember being promised was the ability to do offline servicing of WIM files, and I'm just wondering what the actual procedure for that is. I have downloaded the full SP1 ISO (which covers 7 and 2008 R2) and was wondering about next steps. If someone could point me in the right direction I'd appreciate it greatly. Thanks - Derek
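
    For what it's worth, the offline-servicing tool here is DISM, which ships with Windows 7/2008 R2; the generic flow is mount, add package, commit (paths below are placeholders). One hedged caveat: at SP1's release, Microsoft's guidance was that the service pack itself could not be injected offline, the supported route being to install SP1 on a running reference machine and recapture, so the sketch applies to ordinary update .cab/.msu packages:

        dism /Mount-Wim /WimFile:D:\images\reference.wim /Index:1 /MountDir:C:\mount
        dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\some-update.cab
        dism /Unmount-Wim /MountDir:C:\mount /Commit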

    Read the article

  • Book recommendation for learning server management and Apache

    - by japancheese
    Hello, I'm currently managing a site that I feel could be optimized to be much faster; however, I'm having difficulty finding reliable information on how to do it. I find the Apache documentation a hard read, and too technical about things I don't yet have a strong grasp of. I'm just looking for a good beginner/intermediate book on server administration, to learn as much as possible about Apache and about how to build a secure, robust server that doesn't crash at the first hint of an unusual traffic surge. Thanks to anyone who can point me in the right direction.

    Read the article

  • Solution to time shifting requirement in Active Directory

    - by MikeR
    Hi, I currently have an Active Directory with several child domains (each consisting of nothing but a DC and bespoke application servers) set up for testing our CRM software. As some of the software is date/time sensitive, these domains were at some point set to dates in the future, which is causing replication errors. I'm working on getting rid of these child domains, but our testers still need to be able to time-shift. Does anyone know of a solution that would allow our test environments to have their time changed (always forward) without affecting the production Active Directory? Is it as simple as creating a separate forest on the same LAN, or would that interfere with my production forest? Thanks for any advice.

    Read the article

  • How can I keep configs in my /etc/ dir under git? (sudo has different keys...)

    - by Dean Rather
    I'd like to keep some of the folders in my /etc/ dir under git, because I'm quite new to server administration and am constantly messing around in my /etc/nginx/ and /etc/bind/ directories. I've heard of people versioning their entire /etc/ directory, but that seems like overkill, as at this point I'm only messing with those two subdirectories. The problem I'm having is that if I sudo my git operations, I don't have the right pubkeys to push to my remote repo (Bitbucket). But if I don't sudo, I have to mess around with all the permissions (again, not very pro at this). Does anyone know best practices for managing configs, or how I should solve this problem? Thanks, Dean. PS. It's Ubuntu 12.04, Git, nginx, bind9, Amazon AWS, Bitbucket...
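
    The pubkey mismatch is simply that sudo git runs SSH as root, which looks in /root/.ssh rather than at your own key that Bitbucket knows. Two hedged options (paths are placeholders; etckeeper is the stock tool for exactly this job):

        # the standard tool for keeping /etc under version control
        sudo apt-get install etckeeper

        # or keep your own repo and push as root using your user's key
        # (GIT_SSH_COMMAND needs git >= 2.3; older gits use a GIT_SSH wrapper script)
        cd /etc/nginx
        sudo env GIT_SSH_COMMAND='ssh -i /home/dean/.ssh/id_rsa' git push origin master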

    Read the article

  • Output to TV only works if monitor is plugged in

    - by Greg Sansom
    I am trying to use my TV as the sole output for a computer. The computer has a graphics card with 2 outputs - one RGB and one DVI. I have the RGB output going to the TV, and when the DVI output is connected to a monitor, display works fine on both the monitor and the TV. If I turn off the monitor, or even turn off power to the monitor, the TV continues to display the desktop. If I unplug the DVI cable from the monitor (remember that the monitor doesn't have any power at this point), the TV stops presenting the desktop and displays a "Not Accepted" message. When starting up the computer, the TV displays fine, but stops working at the "press ctrl-alt-delete" screen unless the monitor is connected. How can I make the TV show the display without the monitor? The TV is an LG RT-42PZ45V. The graphics card is an ATI Radeon series HD4350 512MB GDDR2. The computer is running Windows Server 2008 r2.

    Read the article

  • Is there such a thing as a super programmer? [closed]

    - by Muhammad Alkarouri
    Have you come across a super programmer? What identifies him or her as such, compared to "normal" experienced/great programmers? Also, how do you deal with a person on your team who believes he is a super programmer - both in the case where he actually is, and where he isn't? Edit: interesting inputs all round, thanks. A few things can be gleaned. Several definitions emerged; disregarding those that were too localised (identifying the authors or their acquaintances as super programmers), I liked two: Thorbjørn's definition: a person who consistently does the work of a good team, over a long time. The "Free Electron", linked from Henry's answer: a very productive person of exceptional ability. The explanation is a good read: a Free Electron can do anything when it comes to code; they can write a complete application from scratch, learn a language in a weekend and, most importantly, dive into a tremendous pile of spaghetti code, make sense of it, and actually get it working. You can build an entire business around a Free Electron; they're that good. Contrasting with that definition is the point linked to by James about the myth of the genius programmer (video); the same idea is expressed as egoless programming in rwong's comment. These present opposite opinions as to whether to optimise for such a unique programmer or for a team. The definitions are clearly different, so I would appreciate input on which is better - or add your own if you want, though it would help to say how it differs from those.

    Read the article

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model for the application I'm starting to build, and the application just looks like a (horrible) transaction script. Some of the issues I've found: I cannot navigate the object graph, as the stored procedures load only the minimum amount of data, which means we sometimes have similar objects with different fields; for example, one stored procedure retrieves all the data for a customer, and another retrieves account information plus a few customer fields. Lots of the logic ends up in helper classes, so the code becomes "structured" in the procedural sense, with entities used like old C structs. There is more boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them into an entity. My questions are: has anyone been in a similar situation and disagreed with the stored-procedure approach? What did you do? Is there an actual benefit to using stored procedures, apart from the trivial point that "no one can issue a DROP TABLE"? Is there a way to create a rich domain model using stored procedures? I know there's the possibility of using AOP to inject DAOs/repositories into entities to be able to navigate the object graph, but I don't like that option; it's very close to voodoo.

    Read the article

  • Personal Software Process (PSP1)

    - by gentoo_drummer
    I'm trying to figure out an exercise, but it doesn't make much sense to me. I'm not asking anyone to provide the solution, just to help analyse what needs to be done to solve it. I'm trying to understand which PSP 1.0/1.1 process I should use: PROBE, or something else? I would greatly appreciate some help from someone with experience of the Personal Software Process methodology. Here is the question: For the reference case ("code1.c"), the following s/w metrics are provided: man-hours spent in the implementation phase (per module): 2.7 mh/file; man-hours spent in the testing phase (per module): 4.3 mh/file; estimated number of bugs remaining (per module): 0.3 errors/function, 4 errors/module (remaining). Based on the corresponding values provided for the reference case, the following tasks [25 marks total] focus on s/w metrics to be estimated for the test case ("code2.c"): (estimated) man-hours required in the implementation phase (per module) [8 marks]; (estimated) man-hours required in the testing phase (per module) [8 marks]; (estimated) number of bugs remaining at the end of the testing phase (per module) [9 marks]. Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level 1 (PSP1), treating it as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case ("code2.c"), using PSP as the basic estimation model. To perform these tasks, students are advised to consider all phases of the PSP software development process, especially levels PSP0 and PSP1. Both cases are to be treated as separate case studies in the context of classic s/w development.
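
    On the arithmetic side, PSP1's PROBE method with only a single historic data point degenerates to proportional scaling: estimate the size ratio between code2.c and code1.c, then scale each reference rate by it. A hedged Python sketch, with invented LOC counts, since the real sizes must be measured from the two files:

        # single-point PROBE: with one historic observation, the linear
        # regression reduces to  estimate = historic_rate * size_ratio
        ref_loc, new_loc = 480, 720          # hypothetical measured LOC per module
        ratio = new_loc / ref_loc

        impl_mh = 2.7 * ratio                # reference: 2.7 mh/file, implementation
        test_mh = 4.3 * ratio                # reference: 4.3 mh/file, testing
        bugs    = 4.0 * ratio                # reference: 4 remaining errors/module

        print(f"implementation: {impl_mh:.1f} mh/module")
        print(f"testing:        {test_mh:.1f} mh/module")
        print(f"remaining bugs: {bugs:.1f} per module")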

    Read the article

  • Aironet 1200's Auto-Channel Feature: When should it be used?

    - by Josh Brower
    In our building we have around 25 1200-series Aironets, with a bit of overlap in some areas. Up until this point we have had them deployed on alternating channels 1/6/11, but we are wondering whether we would get better performance from the auto channel select feature. In looking around, I have seen comments that this feature should not be used because the WAP scans channels only at radio startup, but I have not found this in any Cisco docs. Does anybody have any more information, or real-world experience with this feature? Thanks! -Josh

    Read the article

  • Book Review – Getting Started With OAuth 2.0

    - by Lori Lalonde
    Getting Started With OAuth 2.0, by Ryan Boyd, provides an introduction to the latest version of the OAuth protocol. The author starts off by exploring the origins of OAuth, its importance, and why developers should care about it. The bulk of the book is a discussion of the various authorization flows developers need to consider when building applications that use OAuth to manage user access and authorization. The author explains in detail which flow is appropriate for the application being developed, and how to implement each type, with step-by-step examples. Note that the examples in the book focus on the Google and Facebook APIs; personally, I would have liked to see some examples with the Twitter API as well. The author also discusses security considerations, error handling (what is returned if an access request fails), and access tokens (when access tokens are refreshed, and how access can be revoked). The book provides a good starting point for developers looking to understand what OAuth is and how they can leverage it within their own applications, and it wraps up with a list of tools and libraries that further assist the developer in exploring APIs supporting the OAuth specification. I highly recommend it as a must-read for developers at all levels who have not yet been exposed to OAuth. The eBook format of this book was provided free through O'Reilly's Blogger Review program. The book can be purchased from the O'Reilly book store at http://shop.oreilly.com/product/0636920021810.do

    Read the article

  • How to customize Windows 7 Explorer Navigation?

    - by Chris
    I would like to customize the left side of my Windows Explorer. Can someone point me to a tutorial on how to do this, or maybe even give me a solution? I would like to change it from the left screenshot (how it looks right now) to the way it looks in the right one. I have found a few Q&As on this site about how to remove this or that, but I want to customize it how I like, not only remove something. There also seems to be a tool available, but it looks like it only permits hiding/showing navigation entries, not re-ordering them. Thanks for any help! chris

    Read the article

  • SharePoint 2010 Video Training

    - by Sahil Malik
    Yes, the DVD is finally available. This is an exhaustive 14-hour video course that Carl and I recorded back in April: an end-to-end overview of SharePoint 2010. You can view more details, including ordering information, about the DVD here. And if you're interested, a SharePoint 2007 video training version is also available. Carl and I worked quite hard on putting these together, so we hope you enjoy them. Detailed table of contents:

        Introduction (13:49)
        30,000 Foot Overview (42:07)
        Application Management (43:35)
        User Experience (16:00)
        Writing Code Part 1 (1:07:49)
        Writing Code Part 2 (34:41)
        Simple Web Parts (14:01)
        Visual Web Parts (6:35)
        Pages (35:02)
        Putting it All Together (29:13)
        Client Side Technology (49:19)
        ADO.NET Data Services (51:29)
        Custom Data Services (43:30)
        Managing Data (29:02)
        Managing Data: Content Types (17:11)
        Managing Data: Events (19:22)
        Managing Data: List Scalability (35:51)
        Managing Data: Querying (20:07)
        Enterprise Content Management: DocumentIDs and Document Sets (16:44)
        Enterprise Content Management: Metadata Infrastructure (22:13)
        Enterprise Content Management: Record Management (26:27)
        Enterprise Content Management: Content Organizer (7:21)
        Enterprise Content Management: Enterprise Content Types (11:21)
        Business Connectivity Services (BCS) in the SharePoint Designer (26:09)
        BCS in Visual Studio (9:57)
        Workflows in the SharePoint Designer (22:07)
        Workflows in Visual Studio (19:01)
        Business Intelligence (21:14)
        Excel (15:25)
        Performance Point (24:37)
        Security: Claims-Based Authentication (27:13)
        Security: Secure Store Service (11:04)
        Security: The SharePoint Object Model (11:16)

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but performed the useful function of polling every system, including all the servers in my charge. It ran a configurable script for each service you needed to monitor, which merely had to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, you could even opt to play a sound file. Very soon, we had a large TFT screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen, or droop like a jelly, you dropped your mug of coffee to do it. It was real fun, but I remember it more for the huge difference it made to have real-time visibility into how the servers were performing. The management soon stopped making jokes about the real reason we wanted the TFT screen (it rendered DVDs beautifully, they said; particularly the flesh-tints). If you were instantly alerted when things started to go wrong, there was a good chance you could fix the problem before the users of the system alerted you to it. There is a world of difference between this sort of tool, which gives whoever is ‘on watch’ in the server room the first warning of a potential problem on any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA brain. I like to get the early warning and the right information to help diagnose a problem: no auto-fix, just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying. Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups, giving a display that scales as the number of monitored databases grows. It shows the history of the major events and trends for the servers. It gives you the icons and colours you can spot out of the corner of your eye, and then just enough detail on drill-down to give you a much clearer idea of where to look with your own DBA tools and routines. It doesn't swamp you with information. While a few server and database-level problems are easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon: SQL Server’s complexity increases faster than the panaceas can be created.
    In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and to provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Hung up troubleshooting packet discards

    - by Chris Satola
    I realize my question is generic, but hopefully someone has some guidance for me. My network consists of Cisco switches, and I am seeing a significant number of transmit drops (upwards of millions of packets per day) on a link between two of them, a 3750 and a 3560. Peak throughput on this link is only in the 400 Mbps range, so it shouldn't be a raw bandwidth issue. At this point I am somewhat clueless about where to look and which tools could tell me what packets are dropping and why. I can set up a SPAN port on that link and wireshark it, but I don't know whether that would tell me anything. Does anyone have any suggestions? Thanks in advance.
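
    On 3750/3560 hardware, output drops at well under line rate are very often microbursts overflowing the shallow per-queue buffers (made worse if 'mls qos' is enabled), which 5-minute average counters hide entirely. A hedged place to start before reaching for a SPAN port (interface names are placeholders):

        show interfaces GigabitEthernet1/0/24 counters errors
        show mls qos interface GigabitEthernet1/0/24 statistics
        show platform port-asic stats drop GigabitEthernet1/0/24

    If the drops land in one egress queue, tuning the buffer allocation (or disabling QoS if unused) is usually more productive than capturing packets, since dropped frames never reach the SPAN destination anyway.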

    Read the article

  • iSCSI: LUNs per target?

    - by badnews
    My question relates specifically to ZFS/COMSTAR, but I assume it applies to any iSCSI system: should one prefer to create a target for every LUN to be exposed, or is it good practice to have a single target with multiple LUNs? Does either approach have a performance impact, and is there some crossover point where the other approach makes sense? The use case is VM disks, where each disk (zvol) is a LUN. So far we have created a separate target for each VM, but a single target containing all the LUNs would probably greatly simplify management... though we might need hundreds of LUNs in a single target (and then possibly tens of initiator connections to that target).
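
    On the COMSTAR side the LUN-to-target mapping is done with views either way, so neither layout is much more work. A hedged sketch of the shared-target variant (pool, names and the GUID are placeholders, and on older releases the first step is sbdadm create-lu):

        # expose one zvol as a logical unit
        stmfadm create-lu /dev/zvol/rdsk/tank/vm01-disk0

        # one target for everything, created once
        itadm create-target

        # then one view per LU (the GUID comes from 'stmfadm list-lu')
        stmfadm add-view 600144f0deadbeef00000000000000a1

    Target groups and host groups can then scope which initiators see which LUNs, which is arguably the real management win of the single-target layout.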

    Read the article
