Search Results

Search found 4360 results on 175 pages for 'dual licensing'.

Page 153/175 | < Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >

  • How to eliminate overscan on Ubuntu HTPC

    - by Norman Ramsey
    I'm using an Ubuntu box with an Nvidia graphics card as an HTPC. My HDTV is a Sony Bravia KDS-R50XBR1; this is a rear-projection unit with many inputs. I am using the HDMI input. I'm using the proprietary Nvidia drivers and they recognize the 1920x1080 resolution just fine. The display is a little fuzzy but at 50 inches it's perfect for movies. My problem is that the TV has three overscan settings, and none of them reduces overscan to zero. When I was using dual-screen this was fine, but I'm moving to where the TV is my only screen, and the Gnome panels are not visible because of the overscan. I'd like to figure out how to eliminate the overscan, without trying to scale my 1080p content down to 920p or something ridiculous like that. Ideally there would be some scurvy trick, perhaps involving the TV service menu, to get rid of the overscan on the TV side. Or I could move the Gnome panels, but I still would be missing the edges of my movies. Suggestions most welcome.

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine, however on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: Downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10MB of content). Example 2: Downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3MB of content). By times out, I mean that the download just stops, and you have to pause/resume to get the next part done, rinse and repeat until the download is complete. However it's not consistent; sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
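
    A hedged first step, assuming the onboard NIC really is the culprit: turn off its hardware offload features (the interface name nfe0 is taken from the post; flag support varies by driver, and current pfSense releases expose equivalent checkboxes under System > Advanced):

        # from Diagnostics > Command or an SSH shell; lasts until reboot
        ifconfig nfe0 -txcsum -rxcsum -tso

    If downloads stop stalling with offload disabled, the onboard nfe NIC or its driver is the likely cause, and a cheap add-on NIC for the LAN side would be a more permanent fix than fighting it.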

    Read the article

  • How to connect through a proxy using Remote Desktop?

    - by scottmarlowe
    So I've got a home server running Windows Server 2003. I use a dual network card setup and Routing and Remote Access to link the internal, private network to the external connection. The external connection hooks directly to my cable modem (so no routers or other devices sitting between). The problem I'm having is that I can't connect remotely from a location outside the house (so connecting to the server's external connection) to the server using either Remote Desktop or VNC. I have enabled both ports in Routing and Remote Access's firewall to allow access, and I have enabled Remote Desktop in Windows Server 2003. The odd thing is that I can access my home server's SVN repository and I can even ping the server's IP. I am using the IP to attempt to connect, though I use a dyndns.com provided name to connect to my SVN repository, so it shouldn't make a difference (I know the IP is getting resolved correctly). Any ideas on where to start diagnosing this one? I haven't seen anything in my server's event log. If any other info is needed, let me know. Thanks. UPDATE: One last piece of information: We use a proxy server at work, which I'm nearly 100% sure is the culprit. I have a workaround--if I connect to our VPN (even though I'm already inside the building) I am able to connect to my home server. This is with VNC. However, is there a way to connect through a proxy using Remote Desktop? ONE MORE UPDATE: Indeed, it was the http proxy I'm sitting behind at work that was causing the issue. An acceptable workaround is to use my VPN connection to bypass the proxy, and I'm in!
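
    For the record, one hedged way to carry RDP through an HTTP proxy (rather than piggy-backing on the VPN) is a CONNECT tunnel with a tool such as proxytunnel; the host names and ports below are placeholders, and many corporate proxies only permit CONNECT to port 443, so the home end may need to listen there instead of 3389:

        # on the work machine: listen locally on 13389 and tunnel via the proxy
        proxytunnel -p proxy.corp.example:8080 -d home.example.org:3389 -a 13389
        # then point Remote Desktop at localhost:13389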

    Read the article

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD Dual Core with 4 gigs RAM). I am working on installing my development environment and tools and one of the first things I was working on is getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did the make; make install, I did the post configuration such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it, the following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution
        Segmentation fault

    I have searched Google extensively, I have searched through the mysql bugs database and I have yet to find anything that matches my issue. Here is the contents of my my.cnf file, in case you want to see it:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock
        [mysql.server]
        user=mysql
        #basedir=/var/lib
        [client]
        socket=/usr/local/mysql/tmp/mysql.sock
        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff
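
    A hedged way to narrow this down (a crash right after the client banner often points at the curses/readline linkage from --with-named-curses-libs rather than at the server): skip the option files, then grab a backtrace. These are standard mysql and gdb invocations, nothing specific to this build:

        # rule out /etc/my.cnf parsing entirely
        mysql --no-defaults -u root -p
        # if it still segfaults, capture where it dies
        gdb --args mysql -u root -p
        (gdb) run
        (gdb) bt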

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors’ world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD4300 / 4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence after I fired up good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz). The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP1 boot-up was complete. Then the Asus displayed “HDMI NO SIGNAL” and went into hibernation. The Princeton stayed lit up as before. Both monitors are displayed on the “Screen Resolution Setup Display” and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still hibernating Asus. The “Multiple displays:” option is set to “Extend these displays”, the Orientation is “Landscape” and the Resolutions are set on both to the “recommended” one. Both monitors show that they work properly in the advanced Properties display. What am I doing wrong, what am I missing? Never mind the opinions about the different resolutions of the two monitors. I can always unhook the Princeton and give it to a Goodwill Store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated. Thank you.

    Read the article

  • help with xorg.conf: xrandr on one of two widescreen monitors; rhel5, kde, ATI Radeon X1300

    - by user35997
    Can anyone help me configure my dual-screen monitors for rotation? I have xrandr 1.1 and have tried various approaches, but nothing takes. I can't even get the xrandr options to show up in KDE's Display control panel. Thanks! My lspci output:

        03:00.0 VGA compatible controller: ATI Technologies Inc RV516 [Radeon X1300/X1550 Series]

    My current xorg.conf (works, minus screen rotation):

        # Xorg configuration created by system-config-display
        Section "ServerLayout"
            Identifier  "Multihead layout"
            Screen      0 "aticonfig-Screen[0]" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            Option      "Xinerama" "off"
            Option      "Clone" "on"
        EndSection
        Section "Files"
        EndSection
        Section "Module"
        EndSection
        Section "InputDevice"
            Identifier "Keyboard0"
            Driver     "kbd"
            Option     "XkbModel" "pc105"
            Option     "XkbLayout" "us"
        EndSection
        Section "Monitor"
            ### Comment all HorizSync and VertSync values to use DDC:
            Identifier  "Monitor1"
            VendorName  "Monitor Vendor"
            ModelName   "Dell 2407WFP (Digital)"
            HorizSync   30.0 - 83.0
            VertRefresh 56.0 - 76.0
            Option      "dpms"
        EndSection
        Section "Monitor"
            Identifier "aticonfig-Monitor[0]"
            Option     "VendorName" "ATI Proprietary Driver"
            Option     "ModelName" "Generic Autodetecting Monitor"
            Option     "DPMS" "true"
        EndSection
        Section "Device"
            Identifier "Videocard0"
            Driver     "vesa"
        EndSection
        Section "Device"
            Identifier "Videocard1"
            Driver     "vesa"
            VendorName "Videocard Vendor"
            BoardName  "ATI Technologies Inc RV516 [Radeon X1300/X1550 Series]"
            BusID      "PCI:3:0:0"
        EndSection
        Section "Device"
            Identifier "aticonfig-Device[0]"
            Driver     "fglrx"
            Option     "DesktopSetup" "horizontal"
        EndSection
        Section "Screen"
            Identifier   "Screen0"
            Device       "Videocard0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth    24
            EndSubSection
            SubSection "Display"
                Viewport 0 0
                Depth    16
            EndSubSection
        EndSection
        Section "Screen"
            Identifier   "Screen1"
            Device       "Videocard1"
            Monitor      "Monitor1"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth    16
                Modes    "1920x1200" "1280x1024" "800x600"
            EndSubSection
        EndSection
        Section "Screen"
            Identifier   "aticonfig-Screen[0]"
            Device       "aticonfig-Device[0]"
            Monitor      "aticonfig-Monitor[0]"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth    24
                Modes    "1920x1200" "1280x1024" "800x600"
            EndSubSection
        EndSection
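
    Two hedged avenues, given xrandr 1.1 and the fglrx driver (which in that era often did not expose RandR rotation, so neither is guaranteed to take): a runtime test, and a driver-level option that applies to the open-source radeon driver rather than to fglrx:

        # runtime test - only works if the driver exposes RandR rotation
        xrandr -o left
        # alternative if that head is switched to the open-source radeon driver:
        # static rotation added to its Device section in xorg.conf
        Option "Rotate" "CCW"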

    Read the article

  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft brand USB device that acts as a receiver for a wireless Microsoft Keyboard and a wireless Mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter 2 are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core processor 5000+. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good. However, something changed in, I think, the past couple weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell, but now after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while it does so, but it has no effect in this state. It's like the receiver doesn't have power (but how would the system know I moved the mouse, unless the power was on until it woke up?). I have checked the BIOS/CMOS settings or whatever you call them, and did not see anything related to USB in the power management section. I have checked Windows 7 device manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing the USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.
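
    A hedged place to start on the Windows side (standard powercfg switches on Windows 7, nothing specific to this receiver): see what is armed to wake and what actually woke the machine, then check the Power Management tab on the receiver's own HID keyboard/mouse entries in Device Manager, not just the USB Root Hubs:

        powercfg -devicequery wake_armed
        powercfg -lastwake
        rem run as administrator; writes an energy/USB selective-suspend report
        powercfg -energy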

    Read the article

  • Configuring WPA WiFi in Ubuntu 10.10

    - by sma
    I am trying to configure my wireless network on my laptop running Ubuntu 10.10 and am having a bit of difficulty. I am a complete Linux newb, but want to learn it, hence the reason I'm trying to set this up. Here's the vitals: It is a Gateway 600 YG2 laptop. It was previously running Windows XP, but I installed Ubuntu 10.10 in place of it (not a dual boot, I removed XP altogether). I have an old wireless card that I'm trying to resurrect. I haven't really used the card in a couple years, but it seems to still work, I just can't connect to my home's wireless network. The card is a Linksys WPC11 v2.5. When I plug it in, Ubuntu recognizes the network, but won't connect to it. My home network uses WPA encryption and the only connection type that Ubuntu's network manager is giving me is WEP and then it asks for a key -- I have no idea what that key should be. So, basically, I'm asking, is there a way I can instead connect through WPA? I've tried creating a new connection in network manager, but that won't work, it keeps falling back to the WEP connection and asking me for a key. I have tried to install the XP driver using ndiswrapper but I don't know if that's working or not. Is there a way to tell if: A) the card is working as it should B) the correct drivers are installed (again, I installed the XP one using ndiswrapper NET8180.INF, but I'm not sure what to do next) Any help would be appreciated. Thank you.
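
    A hedged sanity check from the command line (the interface name wlan0 and the SSID/passphrase are placeholders; the WPC11's chipset may simply not support WPA at all, which scanning will reveal):

        ndiswrapper -l          # driver installed and "hardware present"?
        iwconfig                # does a wireless interface (e.g. wlan0) exist?
        sudo iwlist wlan0 scan | grep -i wpa    # does the card even report WPA networks?
        # minimal /etc/wpa_supplicant.conf for a manual association test
        network={
            ssid="HomeNetwork"
            psk="your-passphrase"
            key_mgmt=WPA-PSK
        }
        sudo wpa_supplicant -D wext -i wlan0 -c /etc/wpa_supplicant.conf

    If NetworkManager only ever offers WEP, that usually means the driver is not reporting WPA capability through the wireless extensions, and no amount of reconfiguring the connection will change that.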

    Read the article

  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old on my C: drive and a dual boot option on boot up. I gave up and booted up the old version and tried to rename the Windows.old to Windows and was asked if I wanted to merge the two folders. I answered yes and all seemed OK until I booted up this morning and was given the choice of two versions of Vista. The first one is the one that failed to install correctly and the second one is the old version. How can I get rid of the failed installation? I got rid of the bad boot via MSCONFIG. Here is my current situation: several hard drives installed, C: as my boot drive, and a much larger drive (H:) for storing most of my files. I found a subfolder in my C:\windows folder named windows. Upon inspection I determined it to be older than the C:\windows folder and therefore it must be the older, working version of the boot. I renamed the C:\windows folder to c:\windows.bad and moved the sub windows folder to the C: root directory. I also copied it to the h: drive. Now MSCONFIG reports that the copy that is booting is the h: copy. How can I change it back to the C:\ copy, and can I delete the C:\windows.bad file set?
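
    A hedged cleanup route once you are sure which copy actually boots: bcdedit is the standard Vista boot-store tool, and the {GUID} below is whatever identifier the listing shows for the dead entry, not a literal value:

        rem list all boot entries with their GUIDs
        bcdedit /v
        rem remove the entry for the failed install (use the GUID from the listing)
        bcdedit /delete {GUID}
        rem make the surviving entry the default
        bcdedit /default {current}

    Only delete C:\windows.bad after the machine has booted cleanly a few times from the copy you intend to keep.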

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. Same goes for my Lenovo Thinkpad-T420. And for both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don’t use RAID, and have no chance of using it in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions – What is the actual purpose of the IRST (pre-OS install) driver that doesn’t get served by the post-OS driver that I installed? There must be some difference, otherwise there wouldn’t be a pre-OS version of the driver. Right? In the pre-OS procedure (loading the drivers at OS-installation time), after successfully completing the OS installation, do I need that post-OS driver? Because after installing from that one I got a quick launch icon that runs the IRST configuration application. Where do I get that after installing the pre-OS driver? As it is “pre-OS”, when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? That’s because I’m going to dual boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for that, is there any chance of any “overwriting” or OS-incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • LCD monitor flicker when connected to a laptop using VGA

    - by Björn Lindqvist
    I have a dual screen setup with two AOC e2450Sw monitors connected to a laptop. The laptop has one HDMI and one VGA output. When one of the monitors is connected using VGA, it flickers or displays static noise. The flickering is fairly subtle and only visible on darker colors. But it is there and noticeable, and appears as horizontal lines. The problem only appears on the monitor connected to the laptop using the VGA cable. If I swap the monitors, the one connected using VGA is displaying the flicker but not the one connected using HDMI. The simple solution would of course be to connect both monitors using HDMI, but since the laptop only has one VGA and one HDMI out that isn't possible. I've tried tweaking the monitor setting using the OSD menu, but it had little or no effect. Update: After several more troubleshooting hours, it seems the problem is not related to the monitor or VGA cable as the problem persists even if I swap the display with another brand and different cables. So it may be the graphics card? Intel HD Graphics 4000. The laptop is an Acer Aspire E1-571.

    Read the article

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell Poweredge 1950, dual Xeon quad cores, 8GB of RAM, two 2TB Seagate SATAs in (what is supposed to be) RAID 1, using a Perc 5i RAID card. They are hot-swappable with a back-plane. I can build the RAID fine and after a little while an install of Server 08 R2 will blue screen and restart. When it comes up the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed but I can import the "foreign config", and the OS will boot up fine, until it blue screens again after a little while. The issue is OS independent. I have tried swapping RAID cards, swapping the RAM module on the RAID card and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the back plane and both disks get lost and the RAID card drops the config. But it sees the disks fine when it boots back up. The RAID card uses a SCSI SAS cable to connect to the back-plane so I guess the next step is to replace that, but... then I might as well replace the back-plane with a SCSI SAS to SATA breakout cable, but... then I need a way to power the disks. Sorry for the wall of txt but it would be great to get some thoughts from people who have worked with Perc RAID cards or Poweredge servers with this type of issue before. Ironically I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help and feel free to ask questions!
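
    If LSI's MegaCli utility is available for the PERC 5/i (the controller is LSI-based, so it usually responds, but treat this as a hedged sketch rather than a Dell-blessed procedure), the controller, battery and foreign-config state can be checked from the booted OS instead of the BIOS utility:

        MegaCli -AdpAllInfo -aALL        # controller, firmware and BBU status
        MegaCli -PDList -aALL            # per-disk state (watch the "Foreign State" field)
        MegaCli -CfgForeign -Scan -aALL  # what the foreign configuration actually contains
        MegaCli -CfgForeign -Import -a0  # same effect as importing from the BIOS prompt

    If the disks keep dropping to a foreign state under load, that points back at the backplane, cabling or drive firmware rather than at the card itself.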

    Read the article

  • Ubuntu 10.04 Keyboard and Mouse Freezing Problem

    - by nitbuntu
    I had a partition setup with Windows XP and Ubuntu 8.04 dual booting. I recently upgraded to Ubuntu 10.04 by installing fresh from CD but leaving the previous /home folder as is. Things seemed to be working fine, but I started finding that my mouse and keyboard were freezing. After a quick search on the internet, I found the following suggestion on the Ubuntu Forums:

        Edit /etc/default/grub, go to the line that begins with:
            GRUB_CMDLINE_LINUX_DEFAULT=
        and change it to:
            GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
        After that, run sudo update-grub and reboot.

    This seemed to have resolved the issue, but after a couple of days I again find my mouse and keyboard freezing. I also find that my parallel port printer has stopped working. I have saved the output of dmesg and my syslog. The first can be viewed here, but the syslog had too many characters, so if someone can suggest an alternative to freetexthost, I can post it there. Moreover, if there is any other information that should be provided, do let me know. I do hope we can get to the bottom of this issue. Thank you in advance for any help that could be provided.
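
    As a hedged way to trim the logs down to something postable (paths are the Ubuntu 10.04 defaults), grabbing just the tail from around the time of a freeze is usually enough:

        dmesg | tail -n 50
        tail -n 100 /var/log/syslog
        grep -iE "error|fail" /var/log/Xorg.0.log | tail -n 20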

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I've recently performed a Load Test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

        Firewall server running MS Forefront TMG 2010 on a Win 2k8 server
        Request routing done by IIS Application Request Routing on the firewall machine
        Web server is a Hyper-V VM on the database server (which is the host OS)
        These machines are hefty, with dual CPUs of six cores each (12 total procs)
        Web server running IIS 7.5
        Web applications built in ASP.NET 2.0, with 1 ISAPI filter (Url Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, req/sec jumps to 70, queued requests jump to 500, current requests jump up, the CPU jumps up, everything. Then once it's handled that group of requests, it has a lull for nearly 10 seconds where nearly nothing is happening: 0-5 req/sec, 0 queued requests, minimal CPU usage. Then after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients send traffic at different intervals with random think times between each request. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause the requests to coalesce like this? Here is what I'm looking at; the highlighted metric is Requests/sec, but the other critical counters go with it: Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?
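
    One hedged thing to rule out at the OS level (standard netsh switches on Server 2008, nothing TMG- or ARR-specific): TCP Chimney and other offloads interacting badly with the virtual switch or with the proxy's buffering, which can make evenly generated traffic arrive in lumps:

        netsh int tcp show global
        netsh int tcp set global chimney=disabled
        netsh int tcp set global rss=disabled

    If the bursts persist with offload disabled on both the host and the guest, the next suspects would be ARR's request buffering and the TMG web proxy rather than the network stack.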

    Read the article

  • Answers to “What source control system do you use?” (and some winners)

    - by jamiet
    About a month ago I posed a question here on my blog SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff) in which I asked SQL Server developers to answer the following questions: Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? I had some really great responses (I highly recommend going and reading them). I promised that the five best, most thought-provoking, responses (as determined by me) would win one of five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect; here are the five that I chose (note that if you responded but did not leave a means of getting in touch then you weren’t considered for one of the prizes – sorry): In general, I don't think the management overhead and licensing cost associated with TFS is worthwhile if all you're doing is using source control. To get value from TFS, at a minimum you need to be using team build, and possibly other stuff as well, such as the sharepoint integration. If that's all you need, then svn with Tortoise would be my first choice. If you want to add build automation later, you can do this with cruisecontrol (is it still called that?), JetBrains, etc. For a long time I thought that Redgate's claims about "bridging the SSMS-VS divide" were a load of hot air, since in my experience anyone who knew what they were doing was using Visual Studio, in particular SSDT and its predecessors. However, on a recent client I was putting in source control for the first time, and I discovered that the "divide" really does exist. That client has ended up using svn with Redgate SQL Source Control, with no build automation, but with scope to add it in the future. Gavin Campbell I think putting the DB under source control is a great idea.  I have issues with the earlier versions of SQL Source Control in that it provides little help in versioning the DB. I think the latest version merges SQL Compare and SQL Source Control together.  Which is how it should have been all along. Sure I have the DB scripts in SVN, but I can't automate DB builds and changes without more tools.  Frankly I'm surprised databases don't have some sort of versioning built into them. Nick Portelli Source control has been immensely useful and saved me from a lot of rework on more than one occasion.  I have learned that you have to be extremely careful checking in data.  Our system is internal only so during the system production run once a week, if there is a problem that I can fix easily(for example, a control table points to a file in the wrong environment), I'll do it directly in production so the run can continue as soon as possible since we have a specified time window.  We do full test runs to minimize this but it has come up once or twice.  We use Red-Gate source control to "push" from the test environment to the production environment.  There have been a couple of occasions where the test environment with the wrong setting was pushed back over the production environment because the change was made only in production.  Gotta keep an eye on that. Alan Dykes Goodness is it manual.  
And can be extremely painful at times.  Not only are we running thin, we are constrained on the tools we can get ($$ must mean free).  Certainly no excuse, and a great opportunity to improve my skills by learning new things.  But...  Getting buy in a on a proven process or methodology is hard, takes time, and diverts us from development.  If SQL Source Control is easy to use and proven oh boy could you get some serious fans around here!  Seriously though, as the "accidental dba" of this shop any new ideas / easy to implement tools can make a world of difference in productivity and most importantly accuracy.  Manual = bad. :) John Hennesey (who left his email address) The one thing I would love to know more about is the unique challenges of working with databases as source code - you can store scripts, but are they written as deployment scripts with all the logic about how to apply them to an existing DB? Where is that baseline DB? Where's the data? How does a team share the data and the code? It's a real challenge. Merrill Aldrich Congratulations to the five of you. Red Gate will be in touch with you soon about your free licenses. Thank you to all those that responded. And again, go and check out all the responses – those above are only small proportion from what is a very interesting comment thread. @Jamiet

    Read the article

  • two computers on same network cannot ping each other nor view NetBIOS resources

    - by slava
    I'd like to find out the problem with my network configuration. My network is laid out as in this diagram: The problem is between laptop1 and laptop2. At first I thought it was a Samba server problem. I was configuring a Samba server on one of the laptops and I wasn't able to access the shares from the second laptop no matter what I was doing. After installing/removing/configuring samba-server a couple of times I realized that the problem resides somewhere else. Laptop configurations:

        Laptop1: Ubuntu 12.04
        Laptop2: Windows 7 / Ubuntu 12.04 (dual boot)
        Server:  Ubuntu 12.04

    When I do "ping 192.168.0.10" from laptop2, I get "Destination host unreachable". The same happens when I ping in the other direction. When I access Laptop1's shares from Laptop2 with Windows 7 loaded, I get the error message: "Error code: 0x80070035 The network path was not found." When I ping "server" or "router" or "wifi router" from either laptop I get a reply. The same goes for Windows shares: I am able to access the "server"'s shares from Windows and Ubuntu, from either of my laptops. NetBIOS can't function correctly, that's obvious; I am unable to access Windows shares between the laptops. I assume there is a misconfiguration on the "wifi router", but I can't find what specifically. The "wifi router" works as a hub + wifi; it is connected to the "router" not via its WAN port but via LAN1. Please help me configure the router correctly so the laptops can see each other, or at least make NetBIOS work correctly between the laptops so I can access Windows shares. Thanks!
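
    Two hedged checks that usually separate a laptop-firewall problem from an access-point problem (the netsh rule is standard Windows 7 firewall syntax; addresses are whatever the laptops actually hold):

        rem on the Windows 7 laptop: allow inbound echo requests, then retry the ping
        netsh advfirewall firewall add rule name="Allow ICMPv4 ping" protocol=icmpv4:8,any dir=in action=allow
        rem on either machine: after a failed ping, check whether the peer's MAC was learned at all
        arp -a

    If the ARP entry for the other laptop never resolves, the frames are not reaching each other at all, and the likely culprit is wireless client/AP isolation on the "wifi router" rather than anything on the laptops.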

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Cloud in the Big Data Story. In this article we will understand the role of Operational Databases Supporting Big Data Story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can’t just exist in isolation by itself. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example

    Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes among their friends, your change of relationship will also show up in the same list. Now here is the question – do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change for your family members is also because of the technology used in Big Data? Actually the answer is that Facebook uses MySQL to do various updates in the timeline as well as the various events we do on their homepage. It is really difficult to part from operational databases in any real-world business. Now we will see a few examples of operational databases:

        Relational Databases (this blog post)
        NoSQL Databases (this blog post)
        Key-Value Pair Databases (tomorrow’s post)
        Document Databases (tomorrow’s post)
        Columnar Databases (the day after’s post)
        Graph Databases (the day after’s post)
        Spatial Databases (the day after’s post)

    Relational Databases

    We have earlier discussed the RDBMS role in the Big Data story in detail, so we will not cover it extensively over here. Relational databases are pretty much everywhere in most of the businesses which have been here for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL as that has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. PostgreSQL licenses allow modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)

    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market and selecting the right one is always very challenging. Here are a few of the properties which are very essential to consider when selecting the right NoSQL database for operational purposes:

        Data and Query Model
        Persistence of Data and Design
        Eventual Consistency
        Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one which most attracts me is Eventual Consistency.

    Eventual Consistency

    RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as a key mechanism for ensuring data consistency, whereas nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system, there are always various nodes joining and various nodes being removed, as they are often built on commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of the data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data at any time from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should contain the same updated data eventually. As per Wikipedia – eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words – informally, if no additional updates are made to a given data item, all reads to that item will eventually return the same value.

    Tomorrow

    In tomorrow’s blog post we will discuss various other Operational Databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • vmware vmdk disk problem

    - by dmtr
    I have a VMware ESXi 4 server and 2 storage servers (mounted via NFS). Between the storage servers (Fedora 14) is a DRBD cluster (dual primary) and an OCFS2 filesystem; every server also has a local partition with an ext4 filesystem, and both are mounted via NFS on the ESXi server. When I tried to copy a virtual machine (naturally it was powered off) from the ext4 partition to the OCFS2 partition, the "total" block count reported by ls differs, but the file size and md5sum are the same. On the ext4 partition:

        # ls -la
        total 28492228
        -rw------- 1 root root 42949672960 Jan 14 14:46 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    On the OCFS2 partition:

        # ls -la
        total 13974660
        -rw------- 1 root root 42949672960 Jan 14 16:16 disk-flat.vmdk
        # md5sum disk-flat.vmdk
        0eaebe3138beb32f54ea5de6dfe5a987

    When I power on the virtual machine from the OCFS2 partition it doesn't work. The virtual machine runs Windows and it freezes after the Windows logo. From the ext4 partition the virtual machine works. I tested with Linux (created and installed on the ext4 partition and then copied to OCFS2) and the same problem appears. When I create a virtual machine directly on the OCFS2 partition, there are no problems. I tried to copy via the vSphere client, and I have the same problem. Any suggestions?
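
    A hedged check from the Fedora side: the differing "total" is what you would see if one copy is stored sparse (or preallocated) and the other is not, which tells you the copies are not block-identical on disk even though their contents hash the same. Standard coreutils, nothing OCFS2-specific:

        # apparent size vs. blocks actually allocated, run against each copy
        ls -ls disk-flat.vmdk
        du -h --apparent-size disk-flat.vmdk
        du -h disk-flat.vmdk

    If hand-copying stays unreliable, cloning from the ESXi side with vmkfstools -i (so the descriptor and -flat files stay paired and allocation is handled by the hypervisor's own code paths) is a hedged alternative worth trying.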

    Read the article

  • How do I install a different OS on a Compaq Presario cq56 with preinstalled SuSE 11?

    - by McCoy
    Thing is, I don't have a clue of Linux systems, I usually use WinXP. Bought a notebook with SuSE 11 on it, because I have my XP licence and thought I could install that if I found the chipset drivers for the hardware (which I'm not completely sure I have the right versions of). Then I thought I'd give it a shot with the SuSE, looked nice enough. But I can't get my external hd to work (tried force mount) and the banshee doesn't do anything like playing video. Since that is one of the two main purposes of this notebook, I need to get that to work. Tried downloading VLC player, but that only works with SuSE 11.1 upwards. So I downloaded a SuSE 11.3 and burned the iso. But surprise, no way the notebook would boot from cd. Same with the XP cd (considered setting up a dual boot). And no, I can't get to BIOS to reset to default, either. So I can basically do nothing else than going online with this thing and that's not enough for me (gamer in withdrawal, yikes!). I need at least to get to my firefox profile on the external hd and be able to watch video. Can somebody please help me? I think at this point I'd prefer to install XP and MAYBE the SuSE 11.3 after that. I'm not a native speaker, so please speak plainly, thanks. :) Edit: if this is impossible, could someone please help me with the external hd mount and video playback? Edit: Found out how to boot from cd by now. But still no XP, because I get bluescreen after bluescreen while setup is loading files. I guess it's the missing SATA drivers...

    Read the article

  • Basic multicast network performance problems

    - by davedavedave
    I've been using mpong from 29west's mtools package to get some basic idea of multicast latency across various Cisco switches: 1Gb 2960G, 10Gb 4900M and 10Gb Nexus N5548P. The 1Gb is just for comparison. I have the following results for ~400 runs of mpong on each switch (sending 65536 "ping"-like messages to a receiver which then sends back -- all over multicast). Numbers are latencies measured in microseconds.

        Switch          Average     StdDev     Min        Max
        2960 (1Gb)      109.68463   0.092816   109.4328   109.9464
        4900M (10Gb)    705.52359   1.607976   703.7693   722.1514
        NX 5548 (10Gb)  58.563774   0.328242   57.77603   59.32207

    The result for the 4900M is very surprising. I've tried unicast ping and I see the 4900 has ~10us higher latency than the N5548P (average 73us vs 64us). Iperf (with no attempt to tune it) shows both 10Gb switches give me 9.4Gbps line speed. The two machines are connected to the same switch and we're not doing any multicast routing. OS is RHEL 6. 10Gb NICs are HP 10GbE PCI-E G2 dual-port NICs (I believe they are rebranded Mellanox cards). The 4900 switch is used in a project with tight access control so I'm waiting for approval before I can access it and check the config. The other two I have full access to configure. I've looked at the Cisco document[1] detailing differences between NX-OS and IOS w.r.t. multicast, so I've got some ideas to try out, but this isn't an area where I have much expertise. Does anyone have any idea what I should be looking at once I get access to the switch? [1] http://docwiki.cisco.com/wiki/Cisco_NX-OS/IOS_Multicast_Comparison
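
    Once access to the 4900M is approved, a hedged first pass is to compare IGMP snooping and querier state on the two 10Gb switches, since on a non-routed VLAN with no querier the switches can fall back to flooding or handle the groups quite differently (standard IOS / NX-OS show commands, nothing platform-exotic):

        ! on the 4900M (IOS)
        show ip igmp snooping
        show ip igmp snooping querier
        ! on the Nexus 5548 (NX-OS)
        show ip igmp snooping
        show ip igmp snooping groups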

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now". The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC8 in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness. Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010 published on Monday 17 May. It is ok to single out the ICT sector. It simply is the most important sector right now as it fuels growth in all other sectors. Let's not wait for the entire standardization package which may take another few years. Europe does not have time. The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting", by 2011 in the Horisontal Guidelines now out for public consultation by DG COMP and to some extent by DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :( Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I meant in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased". 
The missed opportunity (in this case not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that--to the EU-- excludes most standards used by the IT industry world wide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short. Generally, I think the EU focus now should be "from policy to practice" and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth: A strong European software strategy centred around open standards based interoperability by 2011. An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015. Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012. Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running) Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with:

        VMware vSphere 4.1 cluster
        PS4000XV EqualLogic storage array (1.6TB volume dedicated to Backup to Disk)
        Physical backup server with a single LTO4 drive
        Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
        Dual Gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the storage array first, and then, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are at currently: with the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job is finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Ubuntu 12.04 froze during update, won't boot

    - by Cichol
    I've recently installed Ubuntu 12.04 on my laptop, and every time I tried to update it, it would freeze for a few seconds and tell me that the updates could not be downloaded. After many, many tries I managed to get them downloaded, but then in the middle of installing them, it froze. Completely. No mouse movement, no blinking lights, no nothing. After a few hours of letting it sit there, I finally hit the power button to do a hard reset, and now when I select Ubuntu on the boot screen (dual-boot with Windows 7), I get a blank purple screen, and then nothing. Another freeze. I've tried getting into the console, but no command I input has any visible effect. I have a ton of music stored on the partition it's in, so I'd really rather not have to reinstall. My specs, to the best of my knowledge:

        Clevo Corp model B7130 (Sager custom)
        CPU: Intel Core i5 @ 2.53 GHz (4 CPUs)
        Graphics card: Nvidia GeForce 425M
        4096 MB RAM
        Drivers: whatever comes with the download of 12.04

    As a side note, I installed Ubuntu via the Windows installer program (Wubi). Does that make a difference?
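
    Since this is a Wubi install, a hedged recovery path before resorting to a reinstall is to finish the interrupted package configuration from a live session; Wubi keeps the Ubuntu filesystem in a root.disk file on the Windows partition, and the device name and paths below are assumptions to check with sudo fdisk -l first:

        # from an Ubuntu 12.04 live CD/USB
        sudo mkdir -p /mnt/win /mnt/ubuntu
        sudo mount /dev/sda2 /mnt/win                     # the Windows 7 partition (assumed)
        sudo mount -o loop /mnt/win/ubuntu/disks/root.disk /mnt/ubuntu
        sudo mount --bind /dev  /mnt/ubuntu/dev
        sudo mount --bind /proc /mnt/ubuntu/proc
        sudo mount --bind /sys  /mnt/ubuntu/sys
        sudo chroot /mnt/ubuntu
        dpkg --configure -a        # finish the interrupted package installs
        apt-get -f install

    The same mounts also give you a window to copy the music off before trying anything riskier.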

    Read the article

  • Setting up a dualboot by installing cloned partitions using clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partitions I've saved using Clonezilla from different places, and to make matters worse Linux Mint is formatted as LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to not mess up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then I have an unusable partition when it clones and I'm not sure how to fix the partition table. Here's what I'm doing:

        1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB)
        2. Clonezilla the Windows image into the sda1 partition (keeping the partition table)
        3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table)

    After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.
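
    Since GRUB already finds Windows but not Mint, a hedged next step is to activate the restored LVM volumes from a live session and regenerate GRUB from inside the cloned Mint; the volume-group and LV names below are placeholders, so check them with sudo vgs / sudo lvs first:

        sudo vgchange -ay                    # activate the LVM volumes restored into sda5
        sudo mount /dev/<vg>/root /mnt       # the Mint root LV (name is an assumption)
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda
        update-grub                          # os-prober should add the Windows 7 entry

    If vgchange finds no volume groups at all, the PV metadata did not survive the -k1 restore, and the LVM signature/partition type on sda5 is the thing to fix first.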

    Read the article

  • Laptop will not boot

    - by WillumMaguire
    This is a Dell Studio 1558 laptop. Now, something is wrong with the charger such that it won't charge the laptop, but the laptop can turn on and operate properly as long as it is attached. It has been like this for a while, but it's not the problem. My problem is that as of yesterday, it takes several minutes to get past the "Dell" startup logo (where it says "F2 setup" and "F12 boot options"). After it gets past, it beeps as normal to tell me about the charger and gives me the F2/F12 options and F1 to continue as normal. I can press F12 to get into boot options and load into my live USB BackTrack 5 ISO, but after "startx" it just stays at a black screen. I can also access the BIOS setup, but see nothing that would help the problem. When I boot to the HDD, it gives me this:

        Intel UNDI, PXE-2.1 (build 083)
        Realtek PCIe GBE Family Controller Series V.2.29 (06/30/09)
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting PXE ROM
        Operating System not found

    Also, pressing F8 gives me the same results as booting normally. It is running Windows 7 Ultimate, with a dual-core Intel i3 @ 2.27 GHz and 4 GB RAM. I think there is an issue with the HDD, as the "Operating System not found" message would lead me to believe. Is this a fixable problem?
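
    Given that the BackTrack USB does boot (even if X fails), a hedged way to confirm or rule out the disk without opening the laptop is to stay at the root console instead of running startx and query the drive directly (the device name sda is an assumption; smartmontools is usually present on BackTrack 5, and fdisk certainly is):

        fdisk -l                        # is the internal disk detected at all?
        smartctl -a /dev/sda            # SMART health: reallocated/pending sector counts
        dmesg | egrep -i "ata|error"    # kernel I/O errors while probing the disk

    A long hang at the Dell logo combined with "Operating System not found" often points at a dying or undetected drive, which is exactly what the SMART output would confirm.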

    Read the article
