Search Results

Search found 9124 results on 365 pages for 'big sal'.

Page 160 of 365

  • Enumerating network shares with NetBIOS

    - by Karrax
    Hello, I have a case where I need to find all connectable shares on my network, and preferably as much information about each share as possible. I could do this manually, but it's quite a big network and that would be too slow. If I did it manually, I'm guessing I would do something like net view, then net use //hostname and browse each host by hand. That would not show hidden shares, however, so it's not a viable option. Does anyone know of a tool that can help me out in this case? I already tried Sysinternals ShareEnum, but it did not work properly: it did a half-decent job, but it reported access denied on lots of shares that were actually open. Any tips on how I can script this are also appreciated. Thank you
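
    A minimal sketch of one scriptable approach, run from a Linux box with the Samba client tools installed; the subnet, credential string and output format below are assumptions, not part of the original question:

      #!/bin/bash
      # Enumerate SMB shares (including hidden admin-style shares ending in $)
      # on every host in a /24 by asking each host for its share list.
      AUTH='EXAMPLE/user%password'          # hypothetical credentials
      for i in $(seq 1 254); do
        host="192.168.1.$i"
        # -g prints machine-parseable "Disk|sharename|comment" lines
        smbclient -L "//$host" -U "$AUTH" -g 2>/dev/null \
          | awk -F'|' -v h="$host" '$1 == "Disk" { print h, $2, $3 }'
      done

    On Windows, a loop that queries the Win32_Share WMI class on each host would be the native equivalent, since WMI reports hidden shares as well.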

    Read the article

  • Where to download MS SQL Server 2005 Developer Edition?

    - by Mark
    Just got put in charge of a big web project. All I know is the web server is running MS SQL 2005, so I need something comparable to test locally. I figure developer edition is my best bet because it offers everything that the enterprise edition does, but is for development purposes only. But this page is pretty worthless http://www.microsoft.com/sqlserver/2005/en/us/developer.aspx Where do I actually download it? What about SQL 2005 Express? Would that meet my needs? I can't figure out all the differences between these stupid MS products.

    Read the article

  • When can an FTP server close its passive connections?

    - by Don Kirkby
    Does the FTP protocol allow the server to close any of its passive connections while the client is still connected? Can it tell when the client is finished receiving and then close the connection? I'm including an FTP server in my application using the pyftpdlib Python project. I've got it to work in active and passive mode, but I'm a bit concerned about when it closes its passive connections. I've tried connecting to it with both FileZilla and the default ftp command in Ubuntu, and in both cases, I get a new passive port for every request. That is, if I sit in the root folder and type ls 10 times, I use up 10 ports. This means that I have to allocate a big block of passive ports for the FTP server to use so it won't run out. As soon as the client disconnects, the server releases all the passive connections associated with that client and those ports can be reused. However, a long-running connection could use up a lot of ports.

    Read the article

  • Request Multiple Maya Floating Server Licenses for extra Satellite clients

    - by Rob
    Hello all: I am currently setting up a 'render farm' for Maya 2008 Unlimited. One Maya workstation license comes with the ability to render on eight satellite nodes, and it works perfectly; the remote rendering works like a charm. However, we have additional boxes to set up as satellite rendering nodes, and we have extra Maya workstation licenses. Ideally, the workstation could take two licenses and thus render on 16 nodes, but I haven't been able to figure out how, or determine whether it is actually possible. It's a big project, where rendering the entire thing takes weeks, so the speed-up would be worth it. Any thoughts?

    Read the article

  • Install Windows XP using USB

    - by AmanBe
    How do I install Windows XP from USB? I have the ISO image, and my CD-ROM drive is not working. I read up on this on the internet, but all the articles are way too complex and long, and they all differ, so I don't know which one to try. I want to know if someone has tried something like this and can tell me the best and easiest way, for example a tool that will automatically write the ISO file onto the flash drive and make it bootable. Thank you in advance.

    Read the article

  • Any limitations for putting an SSD in a Mini? How fast would an external HDD be via Firewire? Is Ser

    - by Cyrcle
    I'm considering getting a Mini for web programming. I do a lot of text searches, so I want to put an SSD in it. Does the Mini have any limitations that might affect the performance of an SSD? I'm also trying to decide whether I should get a Mini Server. I'd like to have two internal drives, so that one can be an SSD for the OS and the code I'm working on, and the other can be my storage drive. However, I'm not sure whether I'll be using the extra functionality of the server edition of OS X, so I'm reluctant to pay the $200 premium. In a "regular" Mini I could put the SSD internally and use a big external drive, but would the external drive be fast enough via FireWire? Thanks in advance for any info.

    Read the article

  • How to prevent people taking software home?

    - by Robert MacLean
    Most companies I have worked at have had either a collection of disks or a network share with the installers of the commonly used software on them. This is to allow the IT department and skilled users to install the software they need on their work machines very easily. However, some users would see this as an opportunity to get "free" software for their home machines. I've seen the draconian approach of locking the machine down completely, but that does not work well (in my view; if you disagree, feel free to comment) because (a) you add so much extra work to IT and (b) users get that Big Brother feeling. So how do you find a way to prevent users from taking home software while still allowing them to install what they need? You can make the assumption that most of the users in the organisations I work in are smart enough to install software; I'm not worried about the tea lady here.

    Read the article

  • internet speed and routers are controlled by whom

    - by Ozgun Sunal
    I need to learn two things; each is somewhat related to the other. The first one is: while our LAN speed is usually 100 Mbps or at gigabit levels (very big compared to WAN speeds), WAN speeds, for instance DSL connections, are far lower than this. However, we are able to download huge files at those lower speeds. Isn't this weird? (My real concern is why WAN speed is lower than LAN speed.) The second is: who controls the routers across the wider Internet? While we, as web clients, are connected to the Internet, packets travel through those routers to the destination networks. Are those routers all inside the ISP networks, and if not, who controls that large number of routers?

    Read the article

  • How to limit my CPU power programmatically on Windows 7?

    - by Ivan
    Whenever I run a CPU-heavy task (like compressing a big set of files into an archive, for example) my CPU switches to full throttle (maximum frequency) and shuts down from overheating in less than a minute. Instead, I would like it to stay slightly slowed down, so it does the task a bit more slowly but is able to reach the finish. At the same time I don't want to dim my screen brightness or adjust anything else that the standard Windows power-saving schemes do. So how do I actually set a cap to limit my CPU power? The CPU is a Core 2 Duo T7250, the OS is Windows 7 32-bit, and there seem to be no BIOS settings or jumpers available to configure the frequencies.
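
    One hedged sketch using the stock Windows 7 power settings rather than any third-party tool; it assumes an elevated command prompt, and the 70% figure is only an example value, not a recommendation:

      REM Cap the "Maximum processor state" for the active power plan so the
      REM CPU never ramps to 100% of its rated frequency (AC and battery).
      powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 70
      powercfg -setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 70
      powercfg -setactive SCHEME_CURRENT

    The same knob is reachable in the GUI under Power Options > Change plan settings > Change advanced power settings > Processor power management, so the command line is optional.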

    Read the article

  • How can I send super large files directly to another computer in the Internet for free?

    - by Cruise
    I regularly need to transfer very large files (30 GB) of financial statistics to my friend. I don't have any problem with bandwidth: it is very broad here. I did some research in the area: (1) I would not use FTP, as it is very tricky to get working behind a NAT. (2) I would not use Skype/MSN/ICQ, as they are not designed for file transfer and they underperform on huge files. (3) I would not use file-sharing services, as I would need to pay for big files (30 GB is a problem here) and I don't like keeping any piece of my data on a third-party server. So I need some smart tool that will do what I need: sending files directly browser-to-browser, not browser-server-browser. Is that so complex? Is there some web application on the Internet that can do this?
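
    Not the browser-to-browser web application the question asks for, but a hedged sketch of a direct peer-to-peer copy with netcat, assuming one side can accept an inbound connection; the port number and file name are arbitrary, the port must be forwarded through the receiver's NAT, and some netcat builds want "nc -l -p 9000" instead:

      # on the receiving machine
      nc -l 9000 > statistics.tar.gz
      # on the sending machine
      nc receiver.example.com 9000 < statistics.tar.gz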

    Read the article

  • Illustrator "Save for Web & Devices" returning crappy, pixelated images

    - by Tory Waterman
    I'm trying to create a nice title for my webpage: a big white title to sit on a black background. I'm using Illustrator to do so. When I create it, it looks nice, but when I hit "Save for Web & Devices", it comes out looking like a pixelated piece of crap on the site. Is there some setting I need to change to make Illustrator save a higher-resolution image? Thanks EDIT I understand, from looking at some other posts, that this may be a result of "posterization" or "dither", but this is only a plain white image, so I don't see how it results in a color problem. (I could be completely misinterpreting these terms.) EDIT Figured the version might be important: I'm using CS5.1

    Read the article

  • Recurring unpack failed on git repo imported from svn

    - by xavier
    I have a git repo created from svn with git-svn. Everything converted just fine, but from time to time, when I try to git push, I get: error: unpack failed: unpack-objects abnormal exit. Other repos on our server (created from scratch or imported from svn) work fine. The workaround is usually to unstage, then commit and push the files one by one, modify the one that fails (e.g. add a whitespace or something) and commit it once again. It's obviously very irritating; for big commits it's a productivity killer and requires a lot of pushes to the server. I'd be grateful for any suggestions on where to look; I couldn't google anything up.
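
    A few hedged diagnostics worth trying before digging deeper; they assume shell access to both the local clone and the bare repository on the server, and the buffer size is just an example value:

      # check the local object store for corruption, then repack it
      git fsck --full
      git gc --aggressive --prune=now
      # run the same two commands inside the bare repository on the server;
      # if pushes only fail for large commits over HTTP, a bigger post
      # buffer sometimes helps:
      git config http.postBuffer 524288000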

    Read the article

  • rsyslog server - Can you split up and organize logs?

    - by Jakobud
    I recently set up one of our servers as an rsyslog server, and I now have our firewall set up to log everything to it. But there doesn't seem to be any organization of the logs: all the firewall logs are just being dumped into /var/log/messages on the rsyslog server. I guess I was expecting them to at least be in a machine-specific log file or directory. How can I organize the incoming logging? If I set up 20 servers to all log everything to a central rsyslog server, I really don't want everything being dumped into one big file or a few files. How can I tell rsyslog where to log what, so that, for example, all the logs for a specific server go into that server's own directory or file? Is this possible?
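
    A minimal sketch of per-host splitting with an rsyslog template, assuming a reasonably recent rsyslog with its network listener already enabled; the paths and the drop-in file name below are only examples:

      # /etc/rsyslog.d/remote.conf (example drop-in file)
      # name each output file after the sending host
      $template PerHost,"/var/log/remote/%HOSTNAME%/messages.log"
      # route everything that did not originate locally to that template...
      if $fromhost-ip != '127.0.0.1' then ?PerHost
      # ...and discard it so it no longer lands in /var/log/messages
      & ~

    After reloading rsyslog (service rsyslog restart), each remote machine gets its own directory under /var/log/remote.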

    Read the article

  • Ubuntu from console/command-line/shell

    - by Xolve
    Early Linux distros, though they required a lot of manual work, were quite good to use from the command line. If the X server didn't start, or you just wanted a shell to work in, they all supported that. The network was configured by init; sound was up and ready; newly inserted devices would be configured and their configuration placed in fstab. There were also small scripts I found on many distros which used windows under X but switched to ncurses on the console. But now all of this needs a GUI with a desktop manager (KDE, GNOME), because the new paradigms (NetworkManager, HAL, etc.) require a GUI. So from just the command line you have to be root (it looks like they believe only geeky admins need that) and edit config files or type big commands. Is there any way to make this easy again in Ubuntu from the shell?
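
    As one concrete example of the kind of thing the question is about, a hedged sketch of bringing up networking from the console on an Ubuntu of that era without NetworkManager; the interface name is an assumption:

      # append to /etc/network/interfaces, then run: sudo ifup eth0
      auto eth0
      iface eth0 inet dhcp

    A static address works the same way with "inet static" plus address, netmask and gateway lines, and ifupdown handles it with no GUI involved.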

    Read the article

  • Deleting Time Machine in Mac OS X 10.6.4

    - by cappuccino
    Does anyone know how to delete Time Machine backups in Mac OS X 10.6.4? Before answering: sudo rm -rf /whateverthetimemachineis does not work. Disabling the ACL permissions first with sudo fsaclctl -p /whatever -d does not work either (sudo: fsaclctl: command not found). Using the "delete all backups" feature in Time Machine is slow as hell and would take days; I need a command-line solution. And no, I don't want to reformat the drive: I have other content on it. Don't say I should have separated it onto two partitions or two drives either; I did it this way since partitions cannot be dynamically resized, and two drives is annoying (what's the point of having a big drive?), plus that has no relation to the issue at hand. I have already googled for hours and read everything on Super User; nothing works, and all the suggested solutions are the four above. Any clues?

    Read the article

  • Nginx save file to local disk

    - by Dean Chen
    My case is this: from our office in China we have to access a web server at our USA headquarters over the Internet, but the network is too slow and we download many big image files, so all our developers have to wait. We want to set up Nginx as a reverse proxy whose upstream is the USA web server. The question is: can we make Nginx save the image files from the USA web server onto its local disk? In other words, can Nginx act as a cache server?
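
    A minimal sketch of nginx acting as a caching reverse proxy; the paths, sizes, retention times and the upstream hostname are placeholders, not values from the question:

      # inside the http {} block of nginx.conf
      proxy_cache_path /var/cache/nginx/usa levels=1:2 keys_zone=usa:50m
                       max_size=20g inactive=30d;
      server {
          listen 80;
          location / {
              proxy_pass        http://usa.example.com;
              proxy_cache       usa;
              proxy_cache_valid 200 30d;   # keep successful responses locally
          }
      }

    With that in place, the first developer to request an image pulls it across the slow link, and every later request is served from the local disk cache until it expires.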

    Read the article

  • How to view multiple log files as one file in unix/linux

    - by user42679
    Hi, I was wondering if there is a convenient way in Linux/Unix to read multiple log files as one. More specifically, I would like to view a sequence of log files (app.log, app.log.1, app.log.2, etc.) as one big file using normal Unix tools (vi, less, etc.), so that when EOF is reached the tool automatically moves to the beginning of the next file. During my work I have to analyze UAT/prod logs to investigate and solve problems, and the fact that I need to traverse many log files disturbs my work and causes delays. Any ideas?
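
    Two minimal approaches, using the file names from the question and assuming the rotated files are not compressed:

      # oldest first, concatenated into a single stream
      cat app.log.2 app.log.1 app.log | less
      # or let less itself hold all the files and move between them
      less app.log.2 app.log.1 app.log

    In the second form, :n and :p inside less jump to the next and previous file, which keeps the file boundaries visible instead of hiding them.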

    Read the article

  • What motherboard for a Core i7 920 Processor?

    - by jasondavis
    I am wanting to build a really nice PC; price is not too important, as I will buy pieces every week or whatever it takes until I get everything I need. I am not a gamer, but I would like to watch video and have 3-4 monitors. I do a lot of programming and use a lot of big programs, so I would like to go all out and get a lot of memory: probably at least 12 GB, possibly more, as I see many boards now support up to 24 GB. I will be using Windows 7. I have decided to go with the Core i7 920 processor (BX80601920). Based on what I posted above, can you please recommend some good motherboards I should look at, and also some good places to purchase them online? Thanks for any help, tips, etc.

    Read the article

  • Update Billing Email Addresses in Sage MAS90

    - by ThaKidd
    I have been tasked with trying to find a method of updating about 500 customer email addresses in MAS90. I recently discovered that they had the ability to email invoices to their customers and because I opened my big mouth, they now want me to find a way to add about 500 emails into the system so they do not have to perform the task manually. The office does have an SQL server which supports ASP sites which contains a list of all of the current email addresses. My plan was to use Microsoft Access 2007 with the MAS90 ODBC connector to attempt this update. My questions are: Is this the right way to go or is there a better method of obtaining these results? Does anyone happen to know which tables I should be looking at? Any and all help is appreciated. Thanks in advance!

    Read the article

  • Installing VMware Tools in Windows Server 2008 breaks system startup

    - by Hoghweed
    I recently created a VMware virtual machine with Windows Server 2008 Enterprise as the guest. The host is Ubuntu 10.04 on my Lenovo laptop. I ran into big trouble that makes the VM unusable after I installed VMware Tools: since installing the tools I am able to boot the system only in safe mode. After some Event Viewer analysis I found that the issue is with drivers installed by VMware Tools. Has anyone run into the same issue? Is there a good practice for doing this? The configuration of the VM is the following: CPU: 1, RAM: 1020, HD: 40 GB (split files, SCSI), CD: IDE. Thanks in advance

    Read the article

  • two identical broadband lines working as one

    - by Katafalkas
    I have been trying to find an answer to this, but all I get is hobbyists trying to connect their Linksys routers together and get some magic out of them. I am thinking of a way to combine two 100 Mbps fiber optic lines into a single connection for our office. I assume it involves some Cisco learning or something like that. I was thinking that I might need to configure a big router to load-balance the NATing in some way. I assume that many of you have done something similar; maybe someone could share the knowledge or at least provide some tips?
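
    One hedged sketch at the Linux end of the spectrum rather than Cisco: per-flow load balancing with a multipath default route. The gateway addresses and interface names are placeholders, and note that this balances connections across the two lines rather than merging them into one 200 Mbps pipe for a single transfer:

      ip route add default scope global \
          nexthop via 10.0.0.1 dev eth0 weight 1 \
          nexthop via 10.0.1.1 dev eth1 weight 1

    True aggregation of both lines into one logical link generally needs support from the provider or identical equipment at both ends, which is usually where the Cisco-side learning comes in.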

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)

    Read the article

  • Automatically Log off Google when logging in using Google OpenID?

    - by Ross Charette
    I use Google as my OpenID provider. Once I log into a website with Google's OpenID, I am then logged into Google as well, which I do not want. Can I somehow automatically log out of my Google account, or prevent Google from staying logged in, every time I use my Google OpenID? I prefer not to have my personal Google account logged in all the time. It's not a big deal to go to Gmail and click log out, but if there is a simpler way, that would be good. Note to admins: this is not just about Stack Overflow, please don't close the question.

    Read the article

  • Standalone server setup for compute capacity

    - by mikera
    I'm developing an application for my company that will require a lot of compute capacity (running some very big mathematical calculations), and looking for some form of server setup to do this. For various reasons, we want to run this on-site in our office rather than hosting it externally. It's been a while since I last had to set up my own servers so I thought I would tap into the collective wisdom of serverfault! My broad requirements are: Budget $30-50k, with an aim to get as much compute capacity as possible for that budget 64-bit servers suitable to run Ubuntu Linux + Java Some relatively standalone rack that can be installed in secure office space Fast/low latency network connections between the servers, but don't really care about connectivity to the outside world Storage capacity shared between the servers - they don't necessarily need their own storage providing they can be booted from a common image Downtime can be tolerated (since the calculations are run in batch mode) The software itself is fault-tolerant, so there is no need for extra resiliency in the server setup (cheap replaceable commodity parts will be fine in general) Given these requirements what kind of setup would you recommend and why?

    Read the article

  • how to correctly download tomcat 6 on centos 5.5

    - by user582862
    Hi guys, I am a bit confused about how to install Tomcat 6 on CentOS 5.5 Final. This is what I am trying to do: # cd /etc/yum.repos.d/ # wget http://jpackage.org/jpackage50.repo # yum install tomcat6 tomcat6-webapps tomcat6-admin-webapps But when I run the wget command, this is what I get: Resolving www.jpackage.org... failed: Temporary failure in name resolution. wget: unable to resolve host address `www.jpackage.org' Could anyone kindly show me the right way, please? I am really in trouble at the moment with this. Thanks in advance.
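
    The error reported is DNS resolution failing, not a yum or Tomcat problem, so a hedged first step is to check the resolver and, if needed, temporarily override it (8.8.8.8 is just an example nameserver):

      cat /etc/resolv.conf                          # is any nameserver configured?
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      # then retry the original steps
      cd /etc/yum.repos.d/
      wget http://jpackage.org/jpackage50.repo
      yum install tomcat6 tomcat6-webapps tomcat6-admin-webapps

    If the box sits behind a proxy, wget and yum need the proxy configured instead, since resolver changes will not help in that case.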

    Read the article
