
  • Philly GiveCamp 2010

    - by wulfers
    Spent the weekend helping out several non-profits, doing what we like to do best: designing, developing, and making people very happy with their new websites, systems, applications, and features. From what I saw at this GiveCamp, about 75 percent of the non-profits needed updated or new websites supporting CMS features and the ability for staff members to update the content on their websites. Some cool apps were designed and developed. A centralized system for distributing daily schedules and task assistance to autistic clients was a showstopper, with an awesome central management interface, iPhone support, and future Windows Phone support for tactile, auditory, and visual cues. SharePoint was upgraded and forms were automated for a volunteer fire company that desperately needed some automation to help the firemen do their primary job. There were many cool sites for non-profits that had either an outdated or non-existent web presence.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within them, and the associated SP Condition(s) on each template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), Start/Stop (STSP). Note the Field values/codes defined for each event. CC&B also offers the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page. So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events. One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service. Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs which have routines to first validate the completion of disconnection field activities that were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the condition on the completed 'disconnect' event: the algorithm implicitly checks for Field Activities with a Completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but it was not able to find any Completed CNP - Cut for Non-payment related field activity, and hence it was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude: if you introduce new customer events, make sure you do not extend or simulate base customer events, the ones included in the base product, as they are further used to provide/validate additional business functions.

    Read the article

  • "Automatically Connect" option for Mobile Broadband crashes GNOME Shell, how to remove network configurations?

    - by Kush
    I'm using Fedora 15 with GNOME Shell. My mobile broadband connection was working absolutely fine until I set the connection type to Connect Automatically using nm-connection-manager. Now, when I start Fedora, the top panel network icon shows a red exclamation symbol, and when I click it, instead of showing me the list of available networks, it shows only "Network Settings". When I open that, GNOME 3's new Network Manager app pops up a dialog saying "Current network settings service is incompatible with this version". A few seconds after log in, the shell freezes and all I can do is log out using Ctrl+Alt+Backspace. I've been facing this problem since I opened the old network manager app using nm-connection-manager in the run dialog and edited my connection to connect automatically. After logging in to the shell I somehow managed to delete that connection from the same app and created a new one, but the problem still exists. How can I delete all network preferences (by deleting configuration files from my home directory or something like that) and reset GNOME 3's network manager to its default state?
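    A hedged starting point for the reset part of the question, a minimal sketch assuming NetworkManager's usual file layout (paths are standard for NetworkManager, not confirmed on this specific system; back up before deleting):

      # System-wide connection profiles live here on NetworkManager systems:
      sudo ls /etc/NetworkManager/system-connections/
      # Remove the broken profile (the file name "Mobile Broadband" is hypothetical):
      sudo rm "/etc/NetworkManager/system-connections/Mobile Broadband"
      # Fedora 15 is systemd-based, so restart the daemon afterwards:
      sudo systemctl restart NetworkManager.service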

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
    Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already: provided training/consulting for 3 government departments; created and taught courseware for delivering Scrum to teams within a high-profile multinational company; and started presenting Microsoft's ALM Engagement Program. So if you are interested in helping companies build better software more efficiently, then enquire at [email protected]. Application Lifecycle Management (ALM) Consultant: An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required; strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, and leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects. Job Requirements. Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred). Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the continental US (a valid national passport must be maintained). This is a full-time position and will be based in the Kirkland office. Preferred Education: Master's Degree in Information Technology or Software Engineering; premium Microsoft certifications on .NET (MCSD) or MCPD, or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience. Minimum Experience and Skills: 7+ years of experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS, or software development environment. Essential Duties & Responsibilities: Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM. Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees. Lead development teams through the complete ALM and/or Visual Studio Team System solution.
    Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises. Define technical solution requirements. Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution. Ensure that the solution is designed, developed, and deployed in accordance with the agreed-upon development work plan. Create and deliver weekly status reports of training and/or consulting progress. Engagement Responsibilities: Have a strong desire to provide thought leadership related to technology and to help grow the business. Work effectively and professionally with employees at all levels of a customer's organization. Have strong verbal and written communication skills. Have effective presentation, organizational, and planning skills. Have effective interpersonal skills and the ability to work in a team environment. Enquire at [email protected]

    Read the article

  • Epsilon : An Oracle Customer Profile

    - by Anand Akela
    ZDNet published an article today based on an interview with Jeff White, vice president, technology, strategic database services at Epsilon. Jeff discussed Oracle Exadata Database Machine and Oracle Enterprise Manager with ZDNet writer Dan Kusnetzky. Read the article "Epsilon: An Oracle Customer Profile". Jeff White, Epsilon VP, was honored as Oracle's Data Warehouse Leader of the Year for Innovative Data Warehouse Deployment of Oracle Exadata and Oracle Enterprise Manager earlier this year. In one of the videos earlier this year, Jeff mentioned that Epsilon has streamlined IT administration, monitoring, and engineered systems maintenance with Oracle Enterprise Manager. Having gained operational efficiencies, Epsilon is now providing greater efficiencies to its customers. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Two Sun Certification Exams To Retire August 1, 2010

    - by Paul Sorensen
    Effective August 1, 2010, Exam CX-310-400 ("Sun Certified Integrator for Identity Manager 7.1"), currently part of the "Sun Certified Integrator for Identity Manager 7.1" certification track, will be retired. We will also retire Exam CX-310-502 ("Sun Certified Java CAPS Integrator"), currently within the "Sun Certified Java CAPS Integrator" certification track. Both exams will remain available for registration and testing at Prometric Testing Centers through July 31, 2010. CREDENTIAL VALIDITY: Please note that these credentials remain valid indefinitely for those holding the certifications. These retirements therefore have no effect on those who complete the certification requirements before August 1, 2010. QUICK LINKS - Retiring Exams: Exam CX-310-400 "Sun Certified Integrator for Identity Manager 7.1"; Exam CX-310-502 "Sun Certified Java CAPS Integrator". Certification Tracks: Sun Certified Integrator for Identity Manager 7.1; Sun Certified Java CAPS Integrator. Learn more: Oracle Certification Retirements

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2-node, fully redundant Oracle RAC cluster? Not very. Unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management, and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it, just check out these blog posts from partners and end users: The Oracle Database Appliance Experience - Zip Zoom Zoom http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html | Off-the-shelf Oracle database servers http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/ | Oracle Database Appliance – Deployment Steps http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps. See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier. The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration based on best practices. But in order to do that, it needs some basic information first, including the system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to specifying the information needed. Using the Appliance Manager GUI is pretty straightforward, stepping through several screens of information to enter data in typical wizard style. Like most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults of the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI. Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type. One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but also as the base for the default hostnames on the following screens. In this example, the System Name is "oda". On the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, where you will see that the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for default host names for the extra ethernet ports available for configuration as part of a Custom configuration (Other Network screen). There is no requirement to use these host names, as you can easily edit any of them.
    This does make filling in the configuration details easier and less prone to "fat fingers" if you are OK with these host names. Here is the full list of automatically filled-in host names, each built from the System Name ("oda" in this example): oda1, oda2, oda1-vip, oda2-vip, oda-scan, oda1-ilom, oda2-ilom, oda1-net1, oda2-net1, oda1-net2, oda2-net2, oda1-net3, oda2-net3. Another auto-fill feature of the Appliance Manager GUI follows the common practice of deploying IP addresses for a RAC cluster in sequential order. On the IP screen, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, sequentially from the first one I entered. As with the host names, these are not required, and can be changed to whatever your IP address values are. One note of caution though: if the first IP address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be reset to the sequential default. If you don't use the sequential IP addresses, pay attention to where you click that mouse. :-) Similarly, after I entered the netmask value in the Netmask field, in this case 255.255.255.0, the gateway value was auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value. The same sequential IP address auto-fill and netmask-to-gateway auto-fill work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) for both nodes. The time these auto-fill capabilities save in entering data is nice, but from my perspective not as important as the opportunity to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.

    Read the article

  • Some Oracle VM 3 updates

    - by wcoekaer
    Today we did another patch set update for Oracle VM 3 (3.0.3, build 227). This can be downloaded from My Oracle Support as patch ID 14736185. There are quite a few updates in here and I highly recommend any Oracle VM 3 customer or user to install this update. This patch can be installed on top of Oracle VM 3.0 versions 3.0.2 and 3.0.3. The patch is cumulative for 3.0.3, so if you already installed patch update 1 (3.0.3-150) then this will just be incremental on top of that and brings you to 3.0.3 build 227. There is a readme file which contains the patch list in the patch info. The following packages are released on ULN for Oracle VM Server 3.0: initscripts-8.45.30-2.100.18.el5.x86_64 (the inittab file and the /etc/init.d scripts); kernel-ovs-2.6.32.21-45.6.x86_64 (the Linux kernel); kernel-ovs-firmware-2.6.32.21-45.6.x86_64 (firmware files used by the Linux kernel); osc-oracle-ocfs2-0.1.0-35.el5.noarch (Oracle Storage Connect ocfs2 plugin); osc-plugin-manager-1.2.8-9.el5.3.noarch (Oracle Storage Connect plugin infrastructure); osc-plugin-manager-devel-1.2.8-9.el5.3.noarch (Oracle Storage Connect plugin development); ovs-agent-3.0.3-41.6.x86_64 (agent for Oracle VM); xen-4.0.0-81.el5.1.x86_64 (Xen is a virtual machine monitor); xen-devel-4.0.0-81.el5.1.x86_64 (development libraries for Xen tools); xen-tools-4.0.0-81.el5.1.x86_64 (various tooling for the manipulation of Xen instances). Errata emails will be sent in the next few days with details on the above updates, or you will find them here. I also did an update of my Oracle VM utilities, to 0.4.0. They are also available from My Oracle Support, patch ID 14736239. These utils can be unzipped and installed on the server running Oracle VM Manager, typically in /u01/app/oracle/ovm-manager-3/ovm_utils. There is a set of man pages in /u01/app/oracle/ovm-manager-3/ovm_utils/man/man8. There are now 6 commands: ovm_vmcontrol (VM-level operations), ovm_servercontrol (server-level operations), ovm_vmdisks (virtual disk/physical location mapping for VM disks), ovm_vmmessage (message passing utility between the manager and the VM tools in the Oracle VM templates), ovm_repocontrol (repository-level operations), and ovm_poolcontrol (pool-level operations). Some of the new changes: at a pool level, acknowledge events and cascade to servers and virtual machines with outstanding events; at a pool level, do a rescan of the storage for fibrechannel/iSCSI disks if you add new devices (it then does this operation on every running server); at a repository level, fix up a device if it had a failed create-repository; at a repository level, refresh the repository, which will update the free space in the UI for ocfs2 repositories; at a server level, acknowledge server events and cascade to virtual machines if needed; at a VM level, acknowledge VM events; at a VM level, bind vcpus to cores with vcpuset/vcpuget. Please see the man pages (a quick way to read them is sketched below), and remember that these tools are just written as-is, no SRs (per the documentation). Hopefully they are useful.
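    Since the utilities ship their man pages inside the install directory rather than the system MANPATH, a small sketch of how you might read them (man's -M flag points it at the bundled directory; the path is the one quoted in the post):

      # Point man at the bundled manual directory and read a utility's page:
      man -M /u01/app/oracle/ovm-manager-3/ovm_utils/man ovm_vmcontrol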

    Read the article

  • What is your strategy for converting RC builds into retail?

    - by Matthew PK
    We're trying to implement a strategy for how we transition our builds from RC to released retail code. When we label a build as a release candidate, we send it to QA for regression. If they approve it, that RC then becomes our released retail code. I liked the idea of "obvious" labeling of versions so that a user knows whether they have a beta, an RC, or retail code, where you would have some obvious watermark in non-retail code (think Windows 7, where RC or non-genuine builds show a watermark in the bottom right). But it seemed strange to us to manipulate the project (to remove the watermark) once it passed regression: if QA certified version a.b.c.d, then our retail code should be that same version, not a.b.c.d+1. What strategies have you employed to clearly label non-release software versions without incrementing your build to disable the watermarks in your retail code? One idea I've considered is having your build look for a signed file in the installer archive; non-release code wouldn't include this file, and so the app would know to display a watermark. But even this seems like QA is then working with non-release code. Ideas?
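    To make the signed-file idea concrete, here is a minimal C# sketch, assuming the marker is a small Authenticode-signed file (e.g. a tiny signed DLL) that only the retail installer lays down; the name release.marker and the publisher check are hypothetical:

      // Minimal sketch: show a watermark unless a signed release marker is present.
      using System.IO;
      using System.Security.Cryptography;
      using System.Security.Cryptography.X509Certificates;

      static class ReleaseCheck
      {
          public static bool IsRetailBuild(string appDir)
          {
              // "release.marker" is a hypothetical signed file shipped only in retail installers.
              string marker = Path.Combine(appDir, "release.marker");
              if (!File.Exists(marker))
                  return false; // beta/RC: caller should render the watermark

              try
              {
                  // Throws CryptographicException if the file carries no valid
                  // Authenticode signature, so a copied dummy file won't pass.
                  var cert = X509Certificate.CreateFromSignedFile(marker);
                  return cert.Subject.Contains("CN=YourCompany"); // hypothetical publisher check
              }
              catch (CryptographicException)
              {
                  return false;
              }
          }
      }

    The QA-certified binaries never change; only the presence of the marker flips the watermark off, so a.b.c.d stays a.b.c.d.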

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading: boatzart@somecomputer: > sudo do-release-upgrade Checking for a new ubuntu release Done Upgrade tool signature Done Upgrade tool Done downloading extracting 'maverick.tar.gz' authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg' tar: Removing leading `/' from member names Reading cache Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Updating repository information WARNING: Failed to read mirror file No valid mirror found While scanning your repository information no mirror entry for the upgrade was found. This can happen if you run a internal mirror or if the mirror information is out of date. Do you want to rewrite your 'sources.list' file anyway? If you choose 'Yes' here it will update all 'lucid' to 'maverick' entries. If you select 'No' the upgrade will cancel. Continue [yN] y WARNING: Failed to read mirror file 96% [Working] Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade: The package 'update-manager-kde' is marked for removal but it is in the removal blacklist. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug against the 'update-manager' package and include the files in /var/log/dist-upgrade/ in the bug report. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Here is the relevant section from /var/log/dist-upgrade/main.log: 2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist 2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.' 2010-11-18 14:05:52,136 DEBUG abort called *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list: deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!
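    For what it's worth, the blacklist complaint is about the update-manager-kde package itself, not the mirror, so one hedged workaround (assuming you don't need the KDE front-end on a GNOME install) is to remove it first and retry:

      # Remove the package the upgrader refuses to remove on its own, then retry:
      sudo apt-get remove update-manager-kde
      sudo do-release-upgrade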

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480's (one at x16 and one at x8) and one Nvidia GTX 460 running at x8. Is there some way, either by a function call in C or an option to lspci, that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: below is what lspci -vvv dumps out for the two GTX 480s; I don't see any differences that pertain to bus speed. 03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at df00 [disabled] [size=128] [virtual] Expansion ROM at b8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] Capabilities: <access denied> 04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at cf00 [size=128] [virtual] Expansion ROM at c8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp.
Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin B routed to IRQ 5 Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K] Capabilities: <access denied> And the only differences I see relate specifically to the memory mapping: myComputer:~> diff card1 card2 3c3 < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 7,11c7,11 < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] < Region 5: I/O ports at df00 [disabled] [size=128] < [virtual] Expansion ROM at b8000000 [disabled] [size=512K] --- > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] > Region 5: I/O ports at cf00 [size=128] > [virtual] Expansion ROM at c8000000 [disabled] [size=512K] 18c18 < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 19a20 > Latency: 0, Cache Line Size: 64 bytes 21c22 < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] --- > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
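    One answer-shaped sketch: the negotiated link speed and width live in the PCIe capability (the LnkSta field), which is exactly the part lspci prints as "Capabilities: <access denied>" when run without root, so re-running as root should reveal the difference:

      # Negotiated PCIe link state per card (bus addresses taken from the question):
      sudo lspci -vv -s 03:00.0 | grep -i LnkSta
      sudo lspci -vv -s 04:00.0 | grep -i LnkSta
      # Expect something like "LnkSta: Speed 5GT/s, Width x16" for the x16 card
      # and "Width x8" for the others; 5GT/s is PCIe 2.0 signaling.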

    Read the article

  • Cannot read from 2nd SATA data drive connected via SATA docking station

    - by Robbo
    I installed 10.10 this week on a dual-boot system. Everything else works fine, but I cannot read from the 2nd SATA drive that holds all my data. The same drive works normally when booted into Windows XP. The interesting part is that I can see the drive in Ubuntu's Disk Manager, can read all its attributes, and can test it; it shows up in Disk Manager, Storage Device Manager, and Mount Manager, and I can mount it and even change attributes. It appears healthy, but it does not show up in "Computer" or anywhere else that it can be accessed. The drive is connected via an external e-SATA docking station which is connected to a SATA port on the motherboard.
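    A few hedged first checks from a terminal (the /dev/sdb1 device name and the NTFS filesystem are assumptions; substitute whatever Disk Utility reports for the dock):

      sudo fdisk -l                         # confirm the kernel sees the disk and its partitions
      dmesg | tail -n 30                    # look for e-SATA link or filesystem errors
      sudo mount -t ntfs-3g /dev/sdb1 /mnt  # assuming an NTFS data drive shared with XP;
                                            # a manual mount usually prints a usable error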

    Read the article

  • Not able to apt-get update from terminal, what to do now?

    - by Utkarsh
    Whenever I try to update from terminal, I get this error: root@Utkarsh[utkarsh]#apt-get update Hit http://packages.bosslinux.in anokha Release.gpg Hit http://packages.bosslinux.in anokha Release Hit http://packages.bosslinux.in anokha/contrib Sources Hit http://packages.bosslinux.in anokha/non-free Sources Hit http://packages.bosslinux.in anokha/main Sources Hit http://packages.bosslinux.in anokha/contrib i386 Packages Hit http://packages.bosslinux.in anokha/non-free i386 Packages Hit http://packages.bosslinux.in anokha/main i386 Packages Ign http://packages.bosslinux.in anokha/contrib Translation-en_US Ign http://packages.bosslinux.in anokha/contrib Translation-en Ign http://packages.bosslinux.in anokha/main Translation-en_US Ign http://packages.bosslinux.in anokha/main Translation-en Ign http://packages.bosslinux.in anokha/non-free Translation-en_US Ign http://packages.bosslinux.in anokha/non-free Translation-en Reading package lists... Done W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/main i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_main_binary-i386_Packages) W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/contrib i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_contrib_binary-i386_Packages) W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/non-free i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_non-free_binary-i386_Packages) W: You may want to run apt-get update to correct these problems
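    Those W: lines are warnings rather than failures: the same bosslinux repository is listed twice. A hedged way to find and fix the repeats (file names under sources.list.d are system-specific):

      # Show every active deb line and which file it comes from:
      grep -rn '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/
      # Delete or comment out the repeated bosslinux lines, then refresh:
      sudo apt-get update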

    Read the article

  • this error appeared when upgrading 12.04 LTS to 12.10 [closed]

    - by habcity
    Possible Duplicate: How do I fix a “Problem with MergeList” error when trying to do an update? ryder@ryder-Q1500M:~$ do-release-upgrade Checking for a new Ubuntu release Get:1 Upgrade tool signature [198 B] Get:2 Upgrade tool [1,200 kB] Fetched 1,200 kB in 6s (6,988 B/s) authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg' extracting 'quantal.tar.gz' [sudo] password for ryder: Reading cache A fatal error occurred Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade has aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade. Traceback (most recent call last): File "/tmp/update-manager-63XThv/quantal", line 10, in sys.exit(main()) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 237, in main save_system_state(logdir) File "/tmp/update-manager-63XThv/DistUpgrade/DistUpgradeMain.py", line 130, in save_system_state scrub_sources=True) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 146, in save_state self._write_state_installed_pkgs(sourcedir, tar) File "/tmp/update-manager-63XThv/DistUpgrade/apt_clone.py", line 173, in _write_state_installed_pkgs cache = self._cache_cls(rootdir=sourcedir) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in init self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 145, in open self._cache = apt_pkg.Cache(progress) SystemError: E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_precise-backports_multiverse_i18n_Translation-en, E:The package lists or status file could not be parsed or opened.
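    As the linked duplicate explains, the usual fix for a MergeList error is to throw away the corrupt downloaded package lists and let apt rebuild them; a sketch:

      # Remove the cached lists (safe to delete; they are re-downloaded), then retry:
      sudo rm -rf /var/lib/apt/lists/*
      sudo apt-get update
      sudo do-release-upgrade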

    Read the article

  • New Java ME security app, Rapid Tracker, is now full version

    - by hinkmond
    Rapid Protect has updated its Java ME security app to be the full version now, instead of a cut-down version that ran on feature phones. Now, that's progress! See: Full Rapid Tracker on Java ME. Here's a quote: Rapid Protect, a leading company focused on the mobile-based safety, security and collaboration space, announces major feature enhancements to its award winning "Rapid Tracker" mobile applications. In addition to many new features, it announced availability of the full Rapid Tracker application on J2ME non-smart feature phones. Hmmm... "on J2ME non-smart feature phones". I wonder if by "non-smart" they mean another word... Perhaps, "non-iDrone-Anphoid"? Hinkmond

    Read the article

  • Virtualization in Ubuntu 11.10

    - by Mascarpone
    Since Ubuntu 11.10 uses a new kernel, it's very difficult to get decent virtualization support. VirtualBox doesn't support guest additions for Ubuntu 11.10, so I can't copy to and from my Ubuntu desktop and Windows, which I absolutely require; plus FreeBSD seems unable to use DHCP without guest additions. Virt-manager instead gives an error on launch: Unable to open a connection to the libvirt management daemon. Libvirt URI is: qemu:///system Verify that: - The 'libvirt-bin' package is installed - The 'libvirtd' daemon has been started - You are member of the 'libvirtd' group unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/connection.py", line 1146, in _open_thread self.vmm = self._try_open() File "/usr/share/virt-manager/virtManager/connection.py", line 1130, in _try_open flags) File "/usr/lib/python2.7/dist-packages/libvirt.py", line 102, in openAuth if ret is None:raise libvirtError('virConnectOpenAuth() failed') libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied The problem is solved by running virt-manager as root, but I don't like that. How do I change permissions to run virt-manager as a normal user? Is there a way to install guest additions on Ubuntu 11.10?
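    The error message itself lists the likely causes; a hedged fix for the permissions part (group name libvirtd as given in the message; membership only applies after a fresh login):

      # Make sure the daemon is running and your user is in the libvirtd group:
      sudo service libvirt-bin start
      sudo usermod -aG libvirtd $USER
      # Log out and back in (or use "newgrp libvirtd") before retrying virt-manager.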

    Read the article

  • 12 Steps to NTFS Shared Folders in Windows Server 2012

    - by KeithMayer
    In the past, managing and sharing NTFS folders could be a real ordeal – there were different tools for managing NTFS permissions vs shared folders and most IT Pros generally used these tools on a server-by-server basis from each server’s console. Server Manager to the rescue! In Windows Server 2012, Server Manager provides a management facelift on top of the disconnected process that we’ve used in the past for sharing folders and setting NTFS permissions. In addition, Server Manager can

    Read the article

  • Where to find URLs for sources.list for debian for running apt-get update?

    - by Boda Cydo
    Can anyone tell me where to find URLs to put in /etc/apt/sources.list for Debian so that I can run apt-get update? I couldn't find a precise answer by searching Google. When I currently try running apt-get update I get: W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/contrib/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21] W: Failed to fetch ftp://ftp.debian.org/debian/dists/lenny/non-free/binary-i386/Packages Unable to fetch file, server said 'Failed to open file. ' [IP: 130.89.148.12 21] I have no idea how to solve this. Here is what my current sources.list looks like: deb ftp://ftp.debian.org/debian lenny main contrib non-free deb-src ftp://ftp.debian.org/debian lenny main contrib non-free deb ftp://ftp.debian.org/debian lenny/updates main contrib non-free deb-src ftp://ftp.debian.org/debian lenny/updates main contrib non-free I'm running debian_version 5.0.8: # cat /etc/debian_version 5.0.8 Thanks!
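    Two grounded observations: the lenny/updates suite is hosted on security.debian.org rather than the main archive, and once a Debian release is archived it moves to archive.debian.org; switching from ftp:// to http:// also sidesteps many firewall/FTP failures. A hedged sources.list for lenny would then look something like:

      deb http://archive.debian.org/debian lenny main contrib non-free
      deb-src http://archive.debian.org/debian lenny main contrib non-free
      deb http://archive.debian.org/debian-security lenny/updates main contrib non-free

    (If lenny is still on the live mirrors when you read this, http://ftp.debian.org/debian works in place of the archive host for the first two lines.)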

    Read the article

  • Nautilus left pane does not expand

    - by dn.usenet
    I would prefer the left pane of Nautilus to behave like the Windows file manager's. It should have expandable/collapsible trees, and if I have /home/mydir-1 and /home/mydir-2, I should be able to see them both in the left pane. When I click on one of them, the files in that directory should show in the right pane. If Nautilus can't do this, please suggest a better file manager that does; I would rather not open 3 panes in Nautilus to do what two panes do just fine in the Windows file manager. Secondly, how can I open two instances of Nautilus? And if it isn't possible with Nautilus, could it be done with some other file manager?

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs non-nullable types. While developing our C# projects, we frequently compare against null to avoid null reference exceptions, typically with an "x == null" test inside an if clause. However, not all types of variables are nullable, which means that setting a variable to null is not allowed in every case; it depends on what kind of type you are defining. But what if there were an extension to your non-nullable type that made it nullable? This extension really exists. In C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and they can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it is a non-nullable type; however, if you declare an "int?", your variable can be null. What you do is convert a non-nullable type to a nullable type. Example: int x = null; is not allowed, while int? x = null; is allowed. While using nullable types, you can check whether a variable is null the same way you do with reference types. But what about setting a default value when a certain variable is null? In these cases the C# .NET Framework lets you set a default value when you assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator, you can still catch the InvalidOperationException which is thrown in these cases. Using the ?? operator your code becomes cleaner and easier to read, and you get a bonus: you can set a default value for multiple variables by chaining ?? (see the sketch below). That's it. Thanks, Rui Machado rpmachado.wordpress.com
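    A compact C# sketch reconstructing what the post's screenshots showed (the original images are not recoverable, so variable names are illustrative; the behavior is standard C#):

      using System;

      class NullableDemo
      {
          static void Main()
          {
              int? x = null;      // allowed: int? is the nullable form of int
              // int y = null;    // not allowed: int is a non-nullable value type

              if (x == null)      // null checks read the same as for reference types
                  Console.WriteLine("x has no value");

              // Without ??: reading .Value of an empty nullable throws
              // InvalidOperationException, so you must guard or catch.
              int a;
              try { a = x.Value; }
              catch (InvalidOperationException) { a = 0; }

              // With ??: the same default in one clean expression.
              int b = x ?? 0;

              // Chained ??: the first non-null value wins across multiple variables.
              int? first = null, second = null;
              int c = first ?? second ?? -1;

              Console.WriteLine("{0} {1} {2}", a, b, c); // prints "0 0 -1"
          }
      }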

    Read the article

  • How can I get my battery level to display in the notification area (system tray)?

    - by Jon
    I'm trying to use the Awesome window manager with GNOME, i.e. running gnome-session --session=ubuntu on login, and it works great for the most part, except for the fact that the notification area/systray is missing a battery indicator. There's the Network Manager applet (nm-applet), a keyboard icon for switching keyboard layouts, but no battery icon as I would've hoped. I thought the command would be something like gnome-power-manager,

    Read the article
