Search Results

Search found 18702 results on 749 pages for 'digital input'.

  • DVI to HDMI - display looks bad

    - by TutorialPoint
    I currently have a computer with an ASUS EAH5770 (ATI Radeon HD 5770) 1GB GDDR5 video card, 4GB RAM, and a 2.6 GHz i5 processor. I just switched from a DVI cable (the blue one) to an HDMI cable (Bigben flat HDMI 1.3c). I use a Samsung SyncMaster 2032MW, which has an HDMI input. The weird thing is that at 1920x1080 my screen spills off the corners of the TV (so it's too wide), and Windows icons and text are not displaying well, though 1080p videos on YouTube look brilliant, as do pictures. So I think it has something to do with Windows. I already have the ATI Catalyst Control Center with the drivers I received with my video card. I don't currently know how to fix these problems. Do I have to reinstall Windows, or is the fix (hopefully) easier than that?

  • NVIDIA X Server TwinView issues with 2 GeForce 8600GT cards

    - by Big Daddy
    Full disclosure: I am relatively new to Linux and loving the learning curve, but I am undoubtedly capable of ignorant mistakes. I have a rig I am building for my home office desktop. The basics: Gigabyte motherboard, AMD 64-bit processor, Ubuntu 12.04 64-bit, two NVIDIA GeForce 8600GT video cards joined by an SLI bridge, and two 22" HP monitors with DVI inputs. So here is my issue with X Server (it is driving me nuts). If I plug both monitors into GPU 0, X Server auto-configures TwinView and all is grand; it works like a charm, though both are running off of GPU 0. If I plug one monitor into GPU 0 and one monitor into GPU 1, X Server enables the monitor on GPU 0, and sees, but keeps disabled, the monitor on GPU 1. My presumption (we all know the saying about presumptions) was that all I would have to do is select the disabled monitor on GPU 1, open the Configure pull-down, and select TwinView. The problem is that when the monitors are plugged in this way, the TwinView option is greyed out and cannot be selected. What am I not understanding here? Is there some sort of configuration I need to do elsewhere for Ubuntu to utilize both GPUs? Any help will be most appreciated; thanks in advance.
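
    (A note for anyone hitting the same wall: TwinView only ever spans the outputs of a single GPU, so a monitor on each card usually means generating an xorg.conf that drives both GPUs, for example with Xinerama. A hedged sketch, untested on this exact hardware:

        # back up any existing config first
        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak 2>/dev/null
        # let the NVIDIA tool write a config covering both GPUs
        sudo nvidia-xconfig --enable-all-gpus --xinerama
        # then restart X, or log out and back in

    Be aware that Xinerama typically disables desktop compositing.)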

  • Application toolkits like Qt versus traditional game/multimedia libraries like SFML

    - by Aaron
    I currently intend to use SFML for my next game project. I'll need a substantial GUI, though (RPG/strategy-type), so I'll either have to implement my own or find an appropriate third-party library; the options seem to boil down to CEGUI, libRocket, and GWEN. At the same time, I do not anticipate doing many advanced graphical effects. My game will be 2D and primarily sprite-based, with some sprite animations. I've recently discovered that Qt applications can have their appearance styled so that they don't have to look like plain OS apps. Given that, I am beginning to consider Qt a valid alternative to SFML. I wouldn't have to implement the GUI functionality I need, and I may not be taking advantage of SFML's lower-level access anyway. The only drawbacks I can think of immediately are the learning curve for Qt and figuring out how to fit game logic inside such a framework after getting used to the input/update/render loop of traditional game libraries. When would an application toolkit like Qt be more appropriate for a game than a traditional game or multimedia library like SFML?

  • Standard for feeding test data to a Nagios plugin?

    - by chiborg
    I'm developing a Nagios plugin in Perl (no Nagios::Plugin, just plain Perl). The error condition I'm checking for normally comes from the output of a command called inside the plugin. However, it would be very inconvenient to create the error condition for real, so I'm looking for a way to feed test output to the plugin to see if it works correctly. The easiest way I've found so far is a command-line option that optionally reads input from a file instead of calling the command:

        if ($opt_f) {
            # test mode: read canned output from the file given via -f
            open(my $fh, '<', $opt_f) or die "Cannot open $opt_f: $!";
            @output = <$fh>;
            close $fh;
        } else {
            # normal mode: capture the real command's output
            @output = `my_command`;
        }

    Are there other, better ways to do this?
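
    (For what it's worth, a sketch of how such a fixture might be captured once and replayed; the plugin and file names here are hypothetical, not from the question:

        # capture the command's real output as a fixture
        my_command > sample_output.txt
        # replay it through the plugin and show the Nagios exit code
        ./check_mything.pl -f sample_output.txt; echo "exit code: $?"

    Hand-editing the captured file is then an easy way to simulate the error condition.)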

  • Can I increase the link speed of the RAS Server on our MS Win2k3 box?

    - by Ducain
    We are running a Win2K3 Server box, and I'm a remote employee who connects via VPN. I've been frustrated for some time by the connection speed over the VPN (the office HQ has a decent connection and I have a business-class connection here), so I decided to do some checking today. This morning I was dialed in and looked at the Networking tab of Task Manager, and I see that the adapter for the RAS Server (the box has 4 Gigabit adapters) shows a speed that seems far too low: the RAS Server link hovers between 300 and 600 Kbps, while the local connection (and the others) all say 1 Gbps. Can I set this to a higher speed? Is this information even accurate? Thanks for the input.

  • Install on Acer Aspire 4752

    - by user216962
    I am at my wits' end with this computer. I bought an Acer Aspire 4752 with a fully loaded version of Windows 7 on it. I prefer Ubuntu, so I began to install 14.04 from USB and got this error:

        [Errno 5] Input/output error
        This is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.

    So I tried a different USB stick: same error. I tried different versions of Ubuntu: same error. I've used Startup Disk Creator and UNetbootin to make bootable USB devices, and I can boot from the USB drive and run Ubuntu live that way. I even checked the hard drive using the tools in Ubuntu; everything was fine, except that it said the hard drive was hot. I tried a different hard drive and got the same error above. I ran a test with memtest86 and everything was fine. No matter what I do, installing from USB gives me the Errno 5 error. I then switched to using DVDs, and now I keep getting a decompression error when installing Ubuntu 14.04 or 12.04. I can't figure out for the life of me why I get nothing but errors. Can anyone help?
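
    (One cheap check in case it was skipped: Errno 5 during install is frequently a corrupt image rather than bad hardware, so verifying the ISO before and after writing it costs little. The filename and device name below are examples, not from the question; compare the hash against the ones Ubuntu publishes for the release:

        md5sum ubuntu-14.04-desktop-amd64.iso
        # after writing, re-read exactly the image's size from the stick and re-hash
        sudo head -c "$(stat -c%s ubuntu-14.04-desktop-amd64.iso)" /dev/sdX | md5sum   # sdX = the USB device

    If the two hashes disagree, the stick or the write is the problem, not the laptop.)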

  • Quickly translate a word from English

    - by licorna
    I'm always reading in English, but I'm a native Spanish speaker (I'm working on my thesis). Sometimes I need to translate a word into Spanish; what I do now is open a new tab, go to Google Translate, and put the word into the input field. Just a quick translation of one word or a short phrase. I'm a Mac and Firefox user. Is there a better way to achieve this? I was thinking that maybe a Dashboard widget would do the trick, and I was looking for one. The other option is to install the Google Toolbar, but I really hate toolbars. I don't know: a good Firefox extension, maybe?

  • TransformXml Task locks config file identified in Source attribute

    - by alexhildyard
    As background: the TransformXml MSBuild task is typically invoked in a custom build step to mark up a web.config file with per-environment configuration; its flexible directives offer highly granular control over the insertion, removal, substitution and transformation of existing configuration hierarchies.

    For those using the TransformXml task (typically in a Web Deployment Project), I raised an issue against Visual Studio 2010 in which the file handle on the input file was not released, meaning that following transformation the source file remained locked. As a result, the best way to transform a file was first to rename it, transform it, and then copy it back, leaving the "locked" file to be freed up later.

    I just heard today that this has now been resolved in Visual Studio 2012 RTM. That's good news, because web.config transformations offer a lot. An intelligent, automated build process will swap in the relevant transform(s), making it much easier to synthesise the developer and build-server builds. This makes for a simpler and more exemplary build process, and with the tighter coupling comes a correspondingly quicker response to developmental change.

    Oh, and don't forget -- it isn't just web.configs you can transform. You can transform app.configs, or indeed any XML file that honours the task's schema and hierarchical rules.

  • Block access to the router from a VLAN using iptables on DD-WRT

    - by NitroxDM
    I set up multiple isolated VLANs in DD-WRT, and now I need to forward a port to vlan2. I isolated the VLANs using:

        iptables -I FORWARD -i br0 -o vlan2 -j DROP
        iptables -I FORWARD -i br0 -o vlan3 -j DROP
        iptables -I FORWARD -i br0 -o vlan4 -j DROP

    Now I need to block clients on each VLAN from accessing the router itself. This doesn't work:

        iptables -I INPUT -i br0 -o vlan2 --dport telnet -j REJECT --reject-with tcp-reset

    I'm new to iptables... am I missing something?
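
    (A guess at what's tripping this up, offered as an untested sketch: packets hitting INPUT have no outgoing interface, so a rule with -o there never matches, and --dport only works together with -p tcp (or udp). Matching the interface the packets arrive on would look like:

        # reject telnet to the router itself from clients on vlan2
        iptables -I INPUT -i vlan2 -p tcp --dport telnet -j REJECT --reject-with tcp-reset

    with one such rule per VLAN.)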

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua.

    There's a growing gap between 20th-century marketing and a modern marketing way of doing business. I can't think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13: customer obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In the weeks prior, I heard about the mystique but didn't know what to expect. What I've come to understand with more clarity is that everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind:

    1. Targeting: Really Know Your Buyer
    2. Engagement: Create a 1:1 Relationship
    3. Conversion: Visualize Guided Thinking
    4. Analysis: Learn What's Working
    5. Marketing Technology: Enable and Extend the Cloud

    Product News from Eloqua Experience 2013

    We made some announcements that John Stetic, VP of Products at Oracle Eloqua, covers in this brief 'Modern Marketing Minute' video recorded after Wednesday's keynote; they are summarized below, too:

    Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers' wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only the key accounts or prospects they want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities.

    Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers can now take data that's not even stored in Eloqua to help target and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one detailed view of the prospect, extending views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua.

    Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar, and simplify resource management.

    Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions.

    Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts, easily accessible in the Topliners Community.

    Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) delivers a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile, and engaging Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks.

    Biggest and Best Eloqua Experience

    There's a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It's not just a concept but a reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year's Eloqua Experience exceptional, including Steve Woods' presentation about the journey of modern marketers and Andrea Ward's conversation with Vince Gilligan, creator of the Breaking Bad television series.

    The 2013 Markie Awards

    The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember, with so many committed and talented people working to create an extraordinary experience!

    To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience.

    More on Erick: 20 years' experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise.

    @mikestiles

  • Is it possible to make Ctrl+C as responsive as Ctrl+Break in the Windows 7 console?

    - by Peter Graham
    Is it possible to make Ctrl+C act like Ctrl+Break in the Windows 7 cmd.exe console? By default, Ctrl+C seems to send a signal only the next time the input buffer is read, whereas Ctrl+Break sends a signal immediately. This makes Ctrl+C useless for ending processes, because when I want to end a process I want to end it immediately. I'm using Ctrl+Break for now, but it's far harder to type. It looks like in DOS you could add BREAK=ON to CONFIG.SYS to achieve this; is there no equivalent in Windows 7?

  • Dual Boot menu with Ubuntu and Windows 8 not showing up

    - by user180630
    I know a lot of posts have been written about this, and I had read most of them when I encountered the problem; none of them solved it. I have successfully installed Ubuntu 12.04 on top of Windows 8, but now my PC simply boots into Windows 8. If I press Esc at the start of BIOS and then F9, GRUB shows up and Ubuntu is listed at the top of the several boot options. I did run Boot-Repair once I had logged into Ubuntu explicitly from GRUB as mentioned above. I did everything Stormvirux said in this link but was still unsuccessful. The debug info is listed here. Something that confuses me is the message Boot-Repair displayed after it did its job:

        You can now reboot your computer. Please do not forget to make your BIOS boot on sda (8004MB) disk! The boot files of [The OS now in use - Ubuntu 12.04.2 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, 200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. (https://help.ubuntu.com/community/BootPartition)

    I don't know why it says the boot files are far from the start of the disk, as I see Ubuntu first in the GRUB menu that comes up at startup. One more input: when I try to place GRUB in sda, Boot-Repair does not progress, giving me the following error:

        GPT detected. Please create a BIOS-Boot partition (>1MB, unformatted filesystem, bios_grub flag). This can be performed via tools such as Gparted. Then try again. Alternatively, you can retry after activating the [Separate /boot/efi partition:] option.

    I had to select [Separate /boot/efi partition:] sdb2.
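
    (For anyone following along: the GParted steps the second message asks for can also be done from a terminal. A hedged sketch only; the start/end offsets and the partition number are assumptions about this particular disk's free space, not taken from the question:

        # create a tiny unformatted partition in free space at the start of the disk
        sudo parted /dev/sda mkpart biosboot 1MiB 3MiB
        # flag it so GRUB can embed its core image there; replace 5 with the
        # number parted actually assigned to the new partition
        sudo parted /dev/sda set 5 bios_grub on

    After that, re-running Boot-Repair with GRUB targeted at sda is the usual next step.)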

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project. For example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will also change any time you do this, which is undesirable, as your clients may rely on the API objects remaining identical for their parsing code to work. (cough C++ WSDL clients...) So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services such as Git for Windows maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.

  • How to add additional disks to a Windows 2008 KVM-based guest?

    - by taazaa
    I have a Win 2008 KVM-based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the D:\ drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the XML file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can (hopefully) clone it and then attach the necessary storage based on the application at hand.

    Update: the XML of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
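
    (Since the failing attach-disk invocation isn't shown, here is a sketch of the shape it would usually take for this guest. The image path and the hdb target are my assumptions based on the XML above (hda and hdc are taken), and note that IDE disks generally cannot be hot-plugged, so the guest may need to be shut off first:

        virsh attach-disk win08e-vm1 /home/taazaa/vmdisk2.img hdb \
          --driver qemu --subdriver raw --persistent

    The --persistent flag writes the new <disk> element into the domain XML so it survives reboots.)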

  • Retrieve malicious IP addresses from Apache logs and block them with iptables

    - by Gabriel Talavera
    I'm trying to keep away some attackers who try to exploit XSS vulnerabilities on my website. I have found that most of the malicious attempts start with a classic "alert(document.cookie);\" test. The site is not vulnerable to XSS, but I want to block the offending IP addresses before they find a real vulnerability, and also to keep the logs clean. My first thought is to have a script constantly checking the Apache logs for IP addresses that start with that probe and sending those addresses to an iptables DROP rule. Something like this:

        cat /var/log/httpd/-access_log | grep "alert(document.cookie);" | awk '{print $1}' | uniq

    What would be an effective way to send the output of that command to iptables? Thanks in advance for any input!
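
    (One possible shape for that last step, as an untested sketch. Two tweaks to the pipeline: sort -u instead of bare uniq, since uniq only collapses adjacent duplicates, and grep -F so the dots and parentheses are matched literally:

        grep -F 'alert(document.cookie);' /var/log/httpd/-access_log \
          | awk '{print $1}' | sort -u \
          | while read -r ip; do
              # -C checks whether the rule already exists; insert only if not
              iptables -C INPUT -s "$ip" -j DROP 2>/dev/null \
                || iptables -I INPUT -s "$ip" -j DROP
            done

    Run from cron, this would pick up new offenders each pass without duplicating rules.)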

  • Pulling in changes from a forked repo without a request on GitHub?

    - by Alec
    I'm new to the social coding community and don't know how to proceed properly in this situation: I created a GitHub repository a couple of weeks ago. Someone forked the project and has made some small changes that have been on my to-do list. I'm thrilled someone forked my project and took the time to add to it, and I'd like to pull the changes into my own code, but I have a couple of concerns.

    1) I don't know how to pull in the changes via git from a forked repo. My understanding is that there is an easy way to merge the changes via a pull request, but it appears as though the forker has to issue that request?

    2) Is it acceptable to pull in changes without a pull request? This relates to the first concern. I'd put the code aside for a couple of weeks and came back to find that what I was going to work on next had been done by someone else, and I don't want to just copy their code without giving them credit in some way. Shouldn't there be a way to pull the changes in even if they don't explicitly ask you to? What's the etiquette here?

    I may be overthinking this, but thanks for your input in advance. I'm pretty new to the hacker community, but I want to do what I can to contribute!
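
    (On the mechanical half of the question, a sketch of one common approach; the remote name, URL, and branch are placeholders, not from the question:

        # add the fork as a second remote and fetch its branches
        git remote add forker https://github.com/forker/project.git
        git fetch forker
        # merge their branch; a regular merge keeps their commits,
        # and therefore their authorship, in your history
        git merge forker/master

    Because git preserves commit authorship, merging or cherry-picking their commits credits them automatically.)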

  • Disappearing instances of VertexPositionColor using MonoGame

    - by Rosko
    I am a complete beginner in graphics development with XNA/MonoGame. I started my own project using MonoGame 3.0 for WinRT. I have this unexplainable issue: some of the vertices disappear while I'm doing some updates on them. Basically, it is a game with balls that collide with the walls and with each other, and in certain conditions they explode. When they explode they disappear. Here is a video demonstrating the issue. I used wireframes so that it is easier to see how vertices go missing. The perfectly exploding balls are the ones that result from user input with mouse clicking. Thanks for the help. The situation is: I draw user primitives with triangle strips, like this:

        graphicsDevice.DrawUserPrimitives<VertexPositionColor>(
            PrimitiveType.TriangleStrip, circleVertices, 0, primitiveCount);

    All of the primitives are in the z-plane (z = 0), so I thought it was culling in action; I tried setting the cull mode to none, but it did not help. Here is the code responsible for the explosion:

        private void Explode(GameTime gameTime, ref List<Circle> circles)
        {
            if (this.isExploding)
            {
                for (int i = 0; i < this.circleVertices.Length; i++)
                {
                    if (this.circleVertices[i] != this.circleCenter)
                    {
                        if (Vector3.Distance(this.circleVertices[i].Position, this.circleCenter.Position) < this.explosionRadius * precisionCoefficient)
                        {
                            var explosionVector = this.circleVertices[i].Position - this.circleCenter.Position;
                            explosionVector.Normalize();
                            explosionVector *= explosionSpeed;
                            circleVertices[i].Position += explosionVector * (float)gameTime.ElapsedGameTime.TotalSeconds;
                        }
                        else
                        {
                            circles.Remove(this);
                        }
                    }
                }
            }
        }

    I'd be really grateful if anyone has suggestions about how to fix this issue.

  • How to print lots of envelopes

    - by Ian Ringrose
    I wish to print onto lots of envelopes:

    - A few thousand as a one-off task (which may repeat each year)
    - Good-quality colour photos and graphics with solid colour (plus some black text)
    - Painless to operate; I don't wish to have to clear jams often

    As I need to put the envelopes into batches, I don't need an input tray that will take hundreds of envelopes. I also need to get a new colour printer to print normal A4 paper; this may or may not be the same printer. I live in the UK, if you care. What should I be looking at?

  • Amazon EC2 - Unable to connect to MySQL

    - by alexus
    I'm having an issue connecting from one VM to another:

        # nmap -p3306 ip-XX-XX-XX-XX.ec2.internal

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-10 17:50 EDT
        Nmap scan report for ip-XX-XX-XX-XX.ec2.internal (XX.XX.XX.XX)
        Host is up (0.000033s latency).
        PORT     STATE  SERVICE
        3306/tcp closed mysql

        Nmap done: 1 IP address (1 host up) scanned in 1.05 seconds

    In my security group I allowed inbound connectivity via protocol TCP, port range 3306, and source 0.0.0.0/0, so theoretically it should work, but in reality it doesn't. I'm running Red Hat Enterprise Linux 7 on both VMs. mariadb.service is running fine on the other VM and I am able to connect to it locally. On the DB host:

        # netstat -anp | grep 3306
        tcp    0    0 0.0.0.0:3306    0.0.0.0:*    LISTEN    2324/mysqld

        # iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source     destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source     destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source     destination

    Any ideas what else I missed?
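
    (A couple of checks that might narrow this down, offered as a sketch rather than a diagnosis: "closed" in nmap means something answered with a reset, so it is worth confirming the name resolves to the VM you think it does, and whether firewalld (the RHEL 7 default) is managing the firewall on the DB host. The username below is a placeholder:

        # does the internal name point at the DB host's private IP?
        getent hosts ip-XX-XX-XX-XX.ec2.internal
        # is firewalld active on the DB host?
        systemctl status firewalld
        # try the port directly with the MySQL client
        mysql -h ip-XX-XX-XX-XX.ec2.internal -P 3306 -u someuser -p

    If firewalld turns out to be active, firewall-cmd --add-port=3306/tcp --permanent followed by firewall-cmd --reload would open the port.)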

  • Snow Leopard Hangs at Login Window

    - by jessecurry
    I've had an issue for the past few months, but I rarely restart, so it hasn't caused too much trouble. Basically, when I start up my Mac (iMac10,1 - 3.06GHz Intel Core 2 Duo, OS X 10.6.3), everything proceeds as usual until I reach the login window. The login window displays normally, but keyboard and mouse input seem to be ignored. This condition persists for around 5 minutes, at which point everything goes back to normal. While the login window is frozen, my second monitor appears entirely blue; it receives its background as soon as the login window becomes responsive. If I start up while holding Shift, the problem still occurs, but the freeze is much shorter. Looking through my logs, I see no activity during the time that the login window is frozen. I've attempted to repair disk permissions and have gone through every possible maintenance option in Cocktail.

  • Default audio device gives an error on Windows 7 (x64) when trying to run VLC from CMD (VideoLAN, VLC)

    - by Ole Jak
    I use Windows 7 (x64, Russian). I want to stream live audio from my default audio capture device (microphone). When I set up VLM using the visual environment (the VLM settings dialog), it all works fine. But when I export the created settings/configuration to a *.vlm file and try to import it back into VLM, I get nothing, even though the .vlm file I opened contains some text... So now I am trying to run VLC with default settings like this:

        vlc -i dshow:// --dshow-adev= :sout=#transcode{acodec=mp3,ab=128,channels=2,samplerate=44100}:std{access=http,mux=raw,dst=127.0.0.1:8084}

    but it dies, giving me errors... =( So what should I do to stream live MP3 from my default audio input device using VLC in non-UI mode?
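
    (Two things stand out to me, hedged since I can't test on this machine: VLC's interface switch is a capital -I (lowercase -i is not, as far as I know, a VLC option), and video capture can be disabled explicitly for an audio-only stream. The usual shape of such a command line would be something like:

        vlc -I dummy dshow:// :dshow-vdev=none :dshow-adev= ":sout=#transcode{acodec=mp3,ab=128,channels=2,samplerate=44100}:std{access=http,mux=raw,dst=127.0.0.1:8084}"

    with -I dummy suppressing the GUI and the :sout chain quoted so cmd.exe passes it through intact.)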

  • Outlook 2010 sharing calendars and updating

    - by Stef
    We recently purchased Outlook 2010 for our law firm. We have several secretaries who need to view all calendars and input information into each of them. We do not have Exchange. I did a test by creating a calendar on one computer and sharing it with a second computer; on the second computer I created a calendar and shared it with the first. Everything seems to work, except that when I put events on computer 1's calendar and then look at computer 1's calendar from computer 2, I cannot see anything. Any advice would help. Thanks.

  • Unable to diagnose Windows 7 lockup

    - by Delyan
    Basic info: Dell Studio XPS 13 laptop (Intel P9600, 4GB RAM, NVIDIA 9400M card, 256GB Samsung PM800 SSD), running Windows 7 Ultimate as well as Fedora 14. Here's the deal: Windows locks up out of nowhere, with no log entries, no dumps, no BSOD; it just freezes. This happens mostly when idle (but it has happened while I was using it too) and does not follow a consistent time frame. No input is accepted; the only solution is to hold the power button. Although this sounds like a clear-cut hardware issue, the reason I'm willing to rule that out is that my primary OS is Fedora 14: it has been working fine for the past two years, and I've been stress-testing the hardware (intentionally or not) every once in a while with no issues. I would like to ask if there's any way to get diagnostic output from Windows in a situation like this. The next step in my testing is to leave it in Safe Mode overnight and see if it locks up, but even if I do that, I still need to figure out what component freezes it during normal operation.

  • Is individual code ownership important?

    - by Jim Puls
    I'm in the midst of an argument with some coworkers over whether team ownership of the entire codebase is better than individual ownership of components of it. I'm a huge proponent of assigning every member of the team a roughly equal share of the codebase. It lets people take pride in their creation, gives the bug screeners an obvious first place to assign incoming tickets, and helps to alleviate "broken window syndrome". It also concentrates knowledge of specific functionality with one (or two) team members, making bug fixes much easier. Most of all, it puts the final say on major decisions with one person who has a lot of input, instead of with a committee. I'm not advocating requiring permission if somebody else wants to change your code; maybe have the code review always go to the owner, sure. Nor am I suggesting building knowledge silos: there should be nothing exclusive about this ownership. But when suggesting this to my coworkers, I got a ton of pushback, certainly much more than I expected. So I ask the community: what are your opinions on working with a team on a large codebase? Is there something I'm missing about vigilantly maintaining collective ownership?

  • Filtering serial port I/O

    - by mr odus
    I am currently working on a project where I communicate with hardware via a COM port on its respective PC (Win XP or 7). It is a fairly large project, and sifting through the log file can be a bit of a pain. This is my current setup: I use PuTTY to do the actual serial communication and write it to a log file, then filter the log using MinGW MSYS:

        tail -f "puttyLog" | grep -i "search term"

    Is there a better way to do this? I mean specifically filtering the input in real time. Not that mine is terrible, but it still involves reading from a log, and sometimes there have been hangups where it is delayed for a minute or two. I have used software in the past with a main I/O window and then internal filter panels, but I can no longer remember or find it.
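
    (One thought on the minute-long delays, hedged since I can't reproduce this setup: they smell like buffering rather than filtering cost. PuTTY's logging panel has a "Flush log file frequently" option worth enabling, and if grep ever sits in the middle of a longer pipeline it will block-buffer its output unless told otherwise:

        # force grep to flush after every matching line; this matters when its
        # output feeds another pipe or file rather than the terminal
        tail -f "puttyLog" | grep --line-buffered -i "search term" | tee filtered.log

    On GNU systems, stdbuf -oL is the more general tool for the same problem.)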
