Search Results

Search found 2417 results on 97 pages for 'mb'.

Page 90 of 97

  • how to setup kismet.conf on Ubuntu

    - by Registered User
    I installed Kismet on my Ubuntu 10.04 machine with apt-get install kismet and everything seems to work fine, but when I launch it I see the following error:

        kismet
        Launching kismet_server: //usr/bin/kismet_server
        Suid priv-dropping disabled. This may not be secure.
        No specific sources given to be enabled, all will be enabled.
        Non-RFMon VAPs will be destroyed on multi-vap interfaces (ie, madwifi-ng)
        Enabling channel hopping.
        Enabling channel splitting.
        NOTICE: Disabling channel hopping, no enabled sources are able to change channel.
        Source 0 (addme): Opening none source interface none...
        FATAL: Please configure at least one packet source. Kismet will not function if no packet sources are defined in kismet.conf or on the command line. Please read the README for more information about configuring Kismet.
        Kismet exiting.
        Done.

    I followed this guide: http://www.ubuntugeek.com/kismet-an-802-11-wireless-network-detector-sniffer-and-intrusion-detection-system.html#more-1776 However, in kismet.conf I am not clear about the following line and what I should change it to:

        source=none,none,addme

    lspci -vnn shows:

        0c:00.0 Network controller [0280]: Broadcom Corporation BCM4312 802.11b/g [14e4:4315] (rev 01)
        Subsystem: Dell Device [1028:000c]
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at f69fc000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [58] Vendor Specific Information <?>
        Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
        Capabilities: [d0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting <?>
        Capabilities: [13c] Virtual Channel <?>
        Capabilities: [160] Device Serial Number
        Capabilities: [16c] Power Budgeting <?>
        Kernel driver in use: wl
        Kernel modules: wl, ssb

    and iwconfig shows:

        lo        no wireless extensions.
        eth0      no wireless extensions.
        eth1      IEEE 802.11bg  ESSID:"WIKUCD"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: <00:43:92:21:H5:09>
                  Bit Rate=11 Mb/s   Tx-Power:24 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr:off
                  Encryption key:off
                  Power Management mode:All packets received
                  Link Quality=1/5  Signal level=-81 dBm  Noise level=-90 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:169  Invalid misc:0   Missed beacon:0

    So, given the output above, what should I put in place of source=none,none,addme?
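
    For reference, in the old Kismet releases shipped with Ubuntu 10.04 each capture source line in kismet.conf follows the pattern source=<capture driver>,<interface>,<descriptive name>. The sketch below is only an illustration: the driver name (b43) and interface (eth1) are assumptions taken from the lspci/iwconfig output above, and it presumes you have switched the card from the proprietary wl driver to the in-kernel b43 driver, since wl generally cannot be put into monitor mode. Check Kismet's README for the capture source names your version actually supports.

        # /etc/kismet/kismet.conf -- hypothetical example, adjust driver and interface to your setup
        source=b43,eth1,bcm4312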

    Read the article

  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) has a built-in limit of not being able to use the last 25% of RAM. You will get a low-memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left....

    NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior. The system will cram the page file to the last byte while refusing to touch that 1/4th of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit?

    EDIT: Here is an older post which discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7 and I'd like to know if it's still in Windows 8. Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory"

    EDIT 2: Here is another link discussing the low-memory warning and how to disable it. Note that he claims the limit for RAM usage is 80%, not 75%. That would seem to be correct, as you can in fact allocate 6.4 GB of RAM on an 8 GB machine. Anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/

    EDIT 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 will use memory beyond 6.4 GB only as the very last resort. I have the low-memory warning switched off here, so programs crashed at the allocation in the last screenshot. With the low-memory warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB. The question is not "Is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "Does Windows 8 still behave this way". With 16 GB this really becomes dumb.

    Read the article

  • Fix single entry from mbr

    - by Sander
    I use EasyBCD to manage my triple boot of (1) Windows Server 2008 R2, (2) Windows 7 Professional and (3) Ubuntu Linux. While trying to change the order of my boot menu I ended up losing the Windows Server entry. Luckily I had a boot menu backup (.bcd file) that allowed me to restore my boot menu using EasyBCD. However, when I now select the Windows Server option in my boot menu, the Windows Server Recovery Environment starts up, so I have to select language/keyboard layout/etc. and am then presented with three recovery options. My goal is to fix the one corrupted Windows Server entry in my boot menu without messing up or losing the two other ones. I'm guessing the Recovery Console (Command Prompt) is the next step and that I will be needing bootrec.exe. But when consulting this page: Use the Bootrec.exe tool in the Windows Recovery Environment to troubleshoot and repair startup issues in Windows (about halfway down there's a link that shows the bootrec.exe options), I become unsure. The page lists 4 options for bootrec.exe: /FixMbr, /FixBoot, /ScanOs and /RebuildBcd. Which option do I need to fix just the server entry of my boot menu? Thanks in advance, Sander

    P.S. All three OSes are on the same physical disk (3 different partitions). Disk layout:

        System reserved (primary partition, 100 MB)
        Windows 7 (primary partition, 150 GB)
        Windows Server 2008 (primary partition, 150 GB)
        Extended partition (Linux partitions (/, /swap, /home), 150 GB + data partition, 150 GB)

    P.P.S. This is what my boot menu looks like using EasyBCD (Detailed/Debug mode) on my Windows 7 installation:

        Windows Boot Manager
        --------------------
        identifier              {9dea862c-5cdd-4e70-acc1-f32b344d4795}
        device                  partition=\Device\HarddiskVolume1
        description             Windows Boot Manager
        locale                  en-US
        inherit                 {7ea2e1ac-2e61-4728-aaa3-896d9d0a9f0e}
        default                 {93f90e43-cae8-11df-b05a-c9177e705936}
        resumeobject            {93f90e3e-cae8-11df-b05a-c9177e705936}
        displayorder            {93f90e43-cae8-11df-b05a-c9177e705936}
                                {93f90e3f-cae8-11df-b05a-c9177e705936}
                                {93f90e46-cae8-11df-b05a-c9177e705936}
        toolsdisplayorder       {b2721d73-1db4-4c62-bf78-c548a880142d}
        timeout                 10
        displaybootmenu         Yes

        Windows Boot Loader
        -------------------
        identifier              {93f90e43-cae8-11df-b05a-c9177e705936}
        device                  partition=\Device\HarddiskVolume3
        path                    \Windows\system32\winload.exe
        description             Windows Server 2008 R2 - Standard
        locale                  en-US
        inherit                 {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence        {93f90e44-cae8-11df-b05a-c9177e705936}
        recoveryenabled         Yes
        osdevice                partition=\Device\HarddiskVolume3
        systemroot              \Windows
        resumeobject            {93f90e42-cae8-11df-b05a-c9177e705936}
        nx                      OptOut

        Windows Boot Loader
        -------------------
        identifier              {93f90e3f-cae8-11df-b05a-c9177e705936}
        device                  partition=C:
        path                    \Windows\system32\winload.exe
        description             Windows 7 - Professional
        locale                  nl-NL
        inherit                 {6efb52bf-1766-41db-a6b3-0ee5eff72bd7}
        recoverysequence        {93f90e40-cae8-11df-b05a-c9177e705936}
        recoveryenabled         Yes
        osdevice                partition=C:
        systemroot              \Windows
        resumeobject            {93f90e3e-cae8-11df-b05a-c9177e705936}
        nx                      OptIn

        Real-mode Boot Sector
        ---------------------
        identifier              {93f90e46-cae8-11df-b05a-c9177e705936}
        device                  partition=C:
        path                    \NST\AutoNeoGrub0.mbr
        description             Ubuntu 10.04 - Lucid Lynx
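
    For reference, none of the four bootrec options appears to target a single entry: /FixMbr and /FixBoot rewrite boot code and /RebuildBcd regenerates the whole store. Inspecting and repairing one entry is more naturally done with bcdedit. The sketch below is only an illustration, not a definitive fix; the GUID is the Server entry's identifier shown above, the drive letter D: is a placeholder for wherever the recovery environment maps the Server partition, and the exact value to correct depends on what /enum shows is wrong (from WinRE you may also need /store to point at the BCD file on the System Reserved partition).

        rem Hypothetical repair sketch, run from the recovery Command Prompt
        rem List every entry and check what the broken Server 2008 R2 entry points at
        bcdedit /enum all

        rem Point the entry back at the partition that actually holds the Server installation
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} device partition=D:
        bcdedit /set {93f90e43-cae8-11df-b05a-c9177e705936} osdevice partition=D: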

    Read the article

  • A gigabit network interface is CPU-limited to 25MB/s. How can I maximize the throughput?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and apparently it is limited by the single-core Intel Atom 230; when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp segmentation offload: on
        udp fragmentation offload: off
        generic segmentation offload: off

    The following is the output of powertop when the network is idle:

        Wakeups-from-idle per second : 61.9     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          90.9% (101.3)       <interrupt> : eth0
           4.5% (  5.0)             iftop : schedule_timeout (process_timeout)
           1.8% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
           0.9% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
           0.5% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25 MB/s is reached:

        Wakeups-from-idle per second : 11175.5  interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          99.9% (22097.4)     <interrupt> : eth0
           0.0% (  5.0)             iftop : schedule_timeout (process_timeout)
           0.0% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
           0.0% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
           0.0% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the roughly 20,000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? As a reference, the other computers on the network can usually transfer at 50+ MB/s without problems. A computer with a Core 2 CPU generates only 5,000 interrupts per second when it is transferring at 110 MB/s; relative to throughput, that is about 20 times fewer interrupts than the Atom system (assuming interrupts scale linearly with throughput). And a minor question: how can I find out which driver is in use for eth0?
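
    For reference, two stock tools may help here. ethtool -i reports which kernel driver is bound to an interface (on an nForce board this is usually forcedeth), which answers the minor question; ethtool -C adjusts interrupt coalescing so the NIC batches packets and raises fewer interrupts per second. Whether this particular driver honours the coalescing knobs, and what values are sensible, is an assumption to verify on the machine itself; the numbers below are illustrative only.

        # Show which driver and firmware the interface is using
        ethtool -i eth0

        # Ask the NIC to batch packets before interrupting (illustrative values)
        sudo ethtool -C eth0 rx-usecs 100 rx-frames 64

        # Confirm whether the interrupt rate actually dropped
        watch -n1 'grep eth0 /proc/interrupts'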

    Read the article

  • Complete machine freezes...at a loss

    - by user28818
    Guys, we built around 12 machines a few months ago to run Ubuntu. They each have the following specs:

        ASUS Z8NA-D6 motherboard
        Dual quad-core Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
        OCZ Mod Extreme Pro 500W power supply
        12 GB Kingston RAM
        Nvidia GeForce 9800 GT graphics card

    My machine ran well for a while. However, it started experiencing random lockups. These lockups are not X lockups; they are complete system freezes. The NIC stops responding and the magic SysRq keys won't work. The machine is dead. I first suspected RAM. Memtest86 didn't find anything, but I replaced the RAM anyway. Still, lockups. So I replaced the graphics card. Still more lockups. They became more and more frequent and started to happen 2-3 times a day. So I replaced the motherboard and power supply in one fell swoop. Suddenly, no more lockups! Woohoo! Except, a week later, in the morning, the machine wouldn't wake up. I reset it, started it up, and the log files showed the last entry at around 11 pm the evening before. This has started occurring with more frequency... now just about every morning I come in, the machine is locked up, and has been since the night before. Yesterday, in the 3 weeks since I replaced the motherboard and power supply, the machine actually locked up on me in mid-work. This is the first time since replacing the two (motherboard and PSU) that this happened while I was using it. All the others occurred while I was away. I'm at a loss. Nothing is in syslog or messages that would indicate a problem around the time of the lockup. Temps are good: I use lm-sensors to monitor them and have a script that writes the output to a file every minute. They never get that high. The only things I haven't replaced at this point are the case and the hard drives. I doubt either could be the cause. What would you do if you were in my shoes? Is there a troubleshooting approach I'm missing? For the record, all of the other machines, all eleven of them, don't have any problems. They're all running the same version of Ubuntu (Lucid) that I am. Thanks!
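
    One idea worth noting: when a box hard-locks, the last kernel messages often never reach the disk, which is why syslog looks clean. Streaming kernel output to another machine with netconsole can sometimes capture an oops or MCE just before the freeze (it shows nothing if the hang is purely hardware). The sketch below is only an illustration; the listener IP, MAC address and port are placeholders for one of the other eleven machines, and the netcat listening syntax varies between netcat variants.

        # On a healthy machine, listen for forwarded kernel messages (port 6666 is arbitrary)
        nc -l -u 6666 | tee lockup.log

        # On the machine that freezes, point netconsole at that listener
        sudo modprobe netconsole netconsole=@/eth0,6666@192.168.1.50/00:11:22:33:44:55

        # Raise the console log level so warnings and oopses are forwarded too
        sudo dmesg -n 8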

    Read the article

  • Linux Best Practices

    - by Zac
    I'm a life-long Windows developer switching over to Linux for the first time, and I'm starting off with Ubuntu to ease the learning curve. My new laptop will primarily be a development machine: 6 GB RAM, 320 GB HD. I'd like there to be 2 non-root users: (a) Development, which will always be me, and (b) Guest, for anyone else. I assume the root user is added by default, like the System Administrator in Windows.

    (1) I'd like to mount /home on its own partition, but how does this work if I have two user accounts (Development and Guest)? Are there 2 separate /home directories, or do they get shared? Is it possible to allocate more space for Development and only a tiny bit of space for Guest in GRUB2? How?!?!

    (2) I'm assuming that it's okay that all of my development tools (Eclipse & plugins, SVN, JUnit, ant, etc.) and Java will end up getting installed in non-/home directories such as /usr and /opt, but that my Eclipse/SVN workspace will live under my /home directory on a separate partition... any problems, issues, concerns with that?

    (3) As far as partitioning schemes go, nothing too complicated, but not plain Jane either:

        Boot partition, 512 MB, in case I want to install other OSes
        Ubuntu & non-/home file system, 187.5 GB
        Swap partition, 12 GB = RAM x 2
        /home partition, 120 GB

    I don't have any bulky media data (I don't have music or video libraries; this is a lean and mean dev machine), so having 320 GB is like winning the lottery and not knowing what to do with all this space. I figured I'd give a little extra space to the OS/FS partition since I'll be running JEE containers locally and doing a lot of file I/O, logging and other memory-intensive operations. Any issues, problems, concerns, suggestions?

    (4) I was thinking about using ext4; it seems to have good filestamping without any space ceiling for me to hit. Any other suggestions for a dev machine?

    (5) I read somewhere that you need to be careful when you install software as the root user, but I can't remember why. What general caveats do I need to be aware of when doing things (installing packages, making system configurations, etc.) as root vs. the "Development" user? Thanks!
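
    On point (1), for reference: a single /home partition is shared by all accounts; each user simply gets a directory under it (/home/development, /home/guest), so there is nothing to split at the GRUB level. If Guest should only ever use a small slice of the partition, disk quotas do that job rather than a separate partition. The sketch below is illustrative only; the device name /dev/sda4, the username guest, and the roughly 10 GB cap are all assumptions.

        # /etc/fstab -- mount the /home partition with user quotas enabled (device name assumed)
        /dev/sda4   /home   ext4   defaults,usrquota   0   2

        # One-time quota setup, then cap the guest account at roughly 10 GB (limits are in 1K blocks)
        sudo mount -o remount /home
        sudo quotacheck -cum /home
        sudo quotaon /home
        sudo setquota -u guest 10000000 10000000 0 0 /home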

    Read the article

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vhost this log is related to? How are these commands from the log being invoked, and how/why are they printed to the log file: is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17--  http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K ....  100%  303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
        % Total % Received % Xferd Average Speed Time Time Time Current
        Dload Upload Total Spent Left Speed
        ^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0^M101 5055 101 5055 0 0 79686 0 --:--:-- --:--:-- --:--:-- 154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18--  http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
        0K .......... .......... .......... .......... ..........  24%  990K 0s
        50K .......... .......... .......... .......... ..........  49% 2.74M 0s
        100K .......... .......... .......... .......... ..........  74% 2.96M 0s
        150K .......... .......... .......... .......... ..........  99% 3.49M 0s
        200K                                                        100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied
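
    For reference, stderr from commands like mkdir, wget and curl ends up in the Apache error log when the commands are spawned by an Apache child process (typically through a vulnerable PHP script or an uploaded web shell), since processes started by Apache inherit its stderr. To narrow down the vhost, check which virtual hosts log to this file and search the matching access logs for the requests that triggered the downloads. The paths below assume a RHEL/CentOS-style Apache layout and are only a starting point:

        # Which vhost configs point at this error log (or define their own)?
        grep -RiE "ErrorLog|CustomLog" /etc/httpd/conf /etc/httpd/conf.d

        # Search every access log for the attacker's fetches
        grep -RiE "dyndn-web|abc\.txt|64\.tar\.gz" /var/log/httpd/

        # What is still running, and what was dropped in the usual writable directories?
        ps auxf
        ls -la /tmp /var/tmp /dev/shm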

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day; the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top Reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly 20,000 MB, i.e. about 20 GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages into the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously, cutting down the I/O would make SQL Server use less RAM. Or maybe it would just slow down the process of chewing up the whole RAM: if far fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.)? Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. I'm asking, before I start digging out all of the bad T-SQL and putting indices here and there, whether there is something else I can do. Any advice beyond what I already know (not much yet) is much appreciated. Leo.
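
    As a quick sanity check on that conversion (a SQL Server page is 8 KB):

        2,504,188 pages x 8 KB/page = 20,033,504 KB
        20,033,504 KB / 1,024       = 19,564 MB, roughly 19 GB

    So the ballpark figure of about 20 GB is essentially right. Note that the Profiler Reads column counts logical reads, so the same cached pages can be counted many times across executions of the statement.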

    Read the article

  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content off that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using: wget --load--cookies=FILE URL ----------------------------------------------- DEBUG output created by Wget 1.12 on linux-gnu. Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D --2011-01-14 13:57:02-- www.x.org/download.php?id=397003 Resolving www.x.org... 1.1.1.1 Caching www.x.org => 1.1.1.1 Connecting to www.x.org|1.1.1.1|:80... connected. Created socket 5. Releasing 0x0943ef20 (new refcount 1). ---request begin--- GET /download.php?id=397003 HTTP/1.0 User-Agent: Wget/1.12 (linux-gnu) Accept: */* Host: www.x.org Connection: Keep-Alive ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 302 Found Date: Fri, 14 Jan 2011 11:26:19 GMT Server: Apache X-Powered-By: PHP/5.2.6-1+lenny8 Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 Vary: Accept-Encoding Content-Length: 0 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html ---response end--- 302 Found Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765 Registered socket 5 for persistent reuse. Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following] Skipping 0 bytes of body: [] done. --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003 Reusing existing connection to www.x.org:80. Reusing fd 5. ---request begin--- GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0 User-Agent: Wget/1.12 (linux-gnu) Accept: */* Host: www.x.org Connection: Keep-Alive Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765 ---request end--- HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Date: Fri, 14 Jan 2011 11:26:20 GMT Server: Apache X-Powered-By: PHP/5.2.6-1+lenny8 Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Vary: Accept-Encoding Content-Length: 2171 Keep-Alive: timeout=15, max=99 Connection: Keep-Alive Content-Type: text/html ---response end--- 200 OK Length: 2171 (2.1K) [text/html] Saving to: `x.out' 0K .. 100% 18.7M=0s 2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
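
    For reference, the long option in wget is --load-cookies FILE (a single hyphen between "load" and "cookies"), and the cookie file must be in Netscape cookies.txt format. If the site keeps its login state in session cookies, an exported file alone may not be enough; one common pattern, shown here only as a sketch with placeholder form field names and the anonymized URLs from the post, is to let wget perform the login itself and reuse the resulting session:

        # Log in with wget and keep the session cookies it receives
        wget --save-cookies=cookies.txt --keep-session-cookies \
             --post-data='username=USER&password=PASS' \
             http://www.x.org/login.php

        # Reuse those cookies for the protected download
        wget --load-cookies=cookies.txt "http://www.x.org/download.php?id=397003"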

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):

        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"

    This runs for a while and then suddenly stops after transferring somewhere from 2-6 GB. The terminal on the source.host machine displays a "Write failed: broken pipe" error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual, with the correct IP address and everything. I tried restarting the networking service with

        $ /etc/init.d/networking restart

    but the problem does not go away. Restarting the machine makes it capable of talking to the network again (it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal), but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar | ssh pipeline above with a straight scp:

        $ scp -r datafolder/ [email protected]:~

    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with

        # echo 12582911 > /proc/sys/net/core/wmem_max

    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!
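
    As a stopgap while the driver issue is tracked down, not a fix: rsync can cap the transfer rate below whatever triggers the hang and can resume after an interruption instead of restarting the whole 50 GB. The 20 MB/s value and the user@destination.host target below are placeholders to experiment with, not known-safe numbers.

        # Copy with resume support and a rate limit (--bwlimit is in KB/s)
        rsync -av --partial --progress --bwlimit=20000 \
              datafolder/ user@destination.host:~/datafolder/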

    Read the article

  • No external network on ubuntu 9.10, though dns works..

    - by user29368
    Hi, I have a weird problem I can't solve. I have several computers, two of them with Xubuntu 9.10. One of them, acting as a media server, has stopped working when it comes to the external network. I can do, for example:

        ping google.com

    which gives me an IP address back, like:

        name@Media:/etc$ ping google.com
        PING google.com (66.102.9.147) 56(84) bytes of data.

    That tells me it reaches the DNS, but I get no replies at all... If I ping a local computer, everything works fine. I can also reach the computer via ssh without any problems. I have always used Network Manager, but now I uninstalled it and made the settings manually in /etc/network/interfaces like this:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.52
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.1

    Still no luck. I have no specific settings for this one in my router, and all the other computers, including my Windows laptop, work fine. This is very annoying since I can't even do an update or anything. ifconfig looks like this:

        eth0      Link encap:Ethernet  HWaddr 00:24:1d:9f:10:89
                  inet addr:192.168.1.52  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::224:1dff:fe9f:1089/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15410 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2693 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1167398 (1.1 MB)  TX bytes:694973 (694.9 KB)
                  Interrupt:27 Base address:0xe000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:2150 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:2150 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:143456 (143.4 KB)  TX bytes:143456 (143.4 KB)

    and route -n like this:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

    I do not know where the address starting with 169.254 comes from. Could that be part of the problem? Hoping for some assistance since I'm totally stuck here. /george
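
    For reference, the routing table itself looks sane; the 169.254.0.0/16 entry is the link-local route Ubuntu adds by default and is normally harmless. A few quick checks help separate a DNS problem from a gateway or forwarding problem. The addresses below come from the output above, except 8.8.8.8, which is just an example of a public IP to ping while bypassing DNS:

        # Can the box reach its own router at all?
        ping -c 3 192.168.1.1

        # Skip DNS entirely and ping a public address directly
        ping -c 3 8.8.8.8

        # See where packets stop on the way out
        traceroute -n google.com

        # If the link-local route is suspected, it can be removed temporarily
        sudo route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0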

    Read the article

  • Windows 7 startup MUCH slower after reinstall (on an SSD)

    - by user326639
    I installed Windows 7 Professional 64-bit OEM (Spanish) on my new machine. As I wanted my Windows to be in English, the web shop where I bought the DVD recommended that I download an ISO file with the same Windows version (but in English), burn it to a DVD and install that, and said that I should be able to use my registration code. ISO location: http://msft-dnl.digitalrivercontent.net/msvista/pub/X15-65805/X15-65805.iso

    I've done this and everything works (I have not activated my Windows yet, but I expect no problem there). Just one thing: its startup is MUCH slower now! Have a look at my PC specs (bottom).

    On my first install (Spanish), it was like:
        - motherboard splash screen -- shows for a second or two
        - list of found drives -- a few seconds
        - the text "Windows starting" -- about a second before the dots appear
        - four colored dots form the Windows logo -- a few seconds after the logo is fully formed it moves on to the login screen

    On my second install (English):
        - motherboard splash screen -- shows for 15 seconds
        - list of found drives -- a few seconds
        - the text "Windows starting" -- shows for 40 seconds before the dots appear
        - four colored dots form the Windows logo -- now it moves on to the login screen about as fast as before

    Once it's up and running it seems to be as responsive as before, although it's possible that I'm just not noticing the difference. I did the first install on the virgin SSD straight from the box. The second time, I let the Windows installation program format the drive first to get rid of the old installation. I noticed that there were two partitions on my SSD: partition 1, 100 MB, "reserved for the system", and partition 2, 111.7 GB. I only formatted the big partition and left the system partition untouched. Between the two installs I didn't open the computer, so everything is connected to the same port, and I did not change anything in the BIOS. Has Windows not recognized my SSD as an SSD, but as a normal HDD? I suspect that Windows has not applied the necessary automatic configuration settings that it should apply for SSDs (but that's just a hunch). How do I get my SSD back into its virgin state, as if it came right from the box, so I can go for a 3rd attempt to install Windows? Should I use DISKPART? Other ideas are welcome.

    Specifications:
        mobo: Gigabyte GA-Z68X-UD3H-B3
        CPU: i7-2600K
        SSD: OCZ Agility 3 2.5"
        HDD: Samsung Spinpoint F4
        mem: Kingston HyperX DIMM 8 GB DDR3-1600
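
    On the DISKPART question, for reference: the clean command wipes the partition table (including the leftover 100 MB "System Reserved" partition), so the next Windows setup sees the drive as blank and recreates its own layout from scratch. This is only a sketch; "select disk 1" is a placeholder and must be double-checked against "list disk" so the SSD is wiped rather than the data HDD. Afterwards, a quick way to confirm Windows is treating the drive as an SSD is to check that TRIM is enabled.

        rem From an elevated Command Prompt; the following commands are typed at the DISKPART> prompt
        diskpart
        list disk
        select disk 1
        clean
        exit

        rem After reinstalling, a result of 0 means TRIM (DisableDeleteNotify) is enabled
        fsutil behavior query DisableDeleteNotify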

    Read the article

  • is it a good idea to change a recovery partition from primary to logical? [HP laptop]

    - by DiegoDD
    I have a new HP laptop, model dv6-6c85la, with a 1 TB hard drive, and it has 4 primary partitions, like this:

        |<- system [199 MB] -|<- c: [899.8 GB] -|<- d:(recovery) [27.5 GB] -|<- e:(hp_tools) [4 GB] -|

    I wanted to make another partition by splitting "C", which is the main partition, into TWO partitions and leaving the rest as it is, but it doesn't let me because there are already 4 primary partitions (the ones in the diagram). I read somewhere that I could in fact split C into 2 partitions, but only if the adjacent partition (in this case d:(recovery)) is converted into a "logical" partition. That way, the new unallocated part taken from C and the recovery partition would each be logical, "inside" an extended partition (right???). As I understand it, the resulting partitions would be:

        primary (system, no letter), primary (c:), extended [ logical (x:) | logical (d:recovery) ], primary (e: hp_tools)

    with "x" being the new one. Am I correct? My question is: if I do convert the recovery partition to logical (and it is thus inside an extended partition adjacent to the new "x:" one), would I have any problems when, in case of a disaster, I would like to restore the system using the now logical instead of primary RECOVERY partition? Or is it completely safe to change it to logical? My main concern is that I think it may need to be primary so the recovery can proceed at boot time? Or am I completely wrong? How does the recovery process happen? I also understand that I can simply create recovery media on DVDs and then even delete that recovery partition completely, but as of now I don't want to do that. I may create the disks, but I don't want to delete the partition, simply because it would be a lot faster and easier to recover from a hard drive than from disks. Wrapping up: if I change a recovery partition from primary to logical, will the system still be capable of using it to recover, or does it NEED to be primary to work? The whole point is that I want to split C:, but as things are I can't do that directly; I'd need to change the recovery partition to logical. Or is there another way? Thanks.

    Read the article

  • JSF index out of bounds exception when submiting a form

    - by selvin
    When I click submit button in a JSF form the following exception occurs. It says an Indexout of bounds exception, but I did not use any ArrayList associated with the code. Is this a bug? what should i do to get rid of this error.. Mojarra: 2.0.2 FCS with primefaces 2.2 JSF: 2.0 NetBeans IDE 6.8 Glassfish Domain V3 Form Code: <p:panel id="jobres" style="min-width: 200px" header="Reservation" widgetVar="jres" closable="true" toggleable="true" > <h:form id="arj" prependId="false" style="width:550px;max-height:400px;overflow:auto;"> <p:tooltip global="true"/> <h:panelGrid columns="2" > <p:panel style="min-width: 220px"> <h:outputLabel value="1.Job type:"/> </p:panel> <h:panelGroup> <h:messages id="aerr"/> <h:selectOneMenu title="Choose a Jobtype" value="#{arjob.jobtype}"> <f:selectItem itemLabel="Sequential" itemValue="sequential"/> <f:selectItem itemLabel="Parallel" itemValue="parallel"/> </h:selectOneMenu> </h:panelGroup> <p:panel> <h:outputLabel value="2.Executable: *"/> </p:panel> <h:panelGroup> <p:fileUpload id="aexeupload" fileUploadListener="#{arjob.chooseListener}" auto="true" update="adlist :erdialog" description="Resource Files"> </p:fileUpload> <br/> <h:panelGroup id="aexelistwrapper"> <p:dataList var="fileList" type="ordered" id="adlist" value="#{arjob.fexelist}"> <p:column> #{fileList}&nbsp; <p:commandLink ajax="true" update="aexelistwrapper" actionListener="#{arjob.removeExe(fileList)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGroup> </h:panelGroup> <p:panel> <h:outputLabel value="3.Argument(s):"/> </p:panel> <h:panelGroup> <u style="color:orange"> <i> <p:inplace emptyLabel="Add Arguments" onEditUpdate="aarglist"> <h:inputText title="Enter the arguments" id="aiparg" value="#{arjob.args}"> <f:ajax event="valueChange"/> </h:inputText> <p:commandButton update="aarglistwrapper erdialog" value="add" actionListener="#{arjob.addArg}"/> </p:inplace> </i> </u> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <h:panelGrid id="aarglistwrapper"> <p:dataList id="aarglist" type="ordered" var="args" value="#{arjob.arglist}"> <p:column id="col2"> #{args}&nbsp; <p:commandLink ajax="true" update="arj:arglistwrapper" actionListener="#{arjob.removeArgs(args)}"> <p:graphicImage title="remove" value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGrid> </h:panelGroup> <p:panel> <h:outputLabel value="4.InputFile(s): *"/> </p:panel> <h:panelGroup> <p:fileUpload id="ainpupload" fileUploadListener="#{arjob.inputChooseListener}" auto="true" update="aipfilelistwrapper :erdialog" description="Resource Files"> </p:fileUpload> <br/> <h:panelGroup id="aipfilelistwrapper"> <p:dataList var="ipfile" type="ordered" id="aipflist" value="#{arjob.finlist}"> <p:column> #{ipfile}&nbsp; <p:commandLink ajax="true" update="aipfilelistwrapper" actionListener="#{arjob.removeInfile(ipfile)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGroup> </h:panelGroup> <h:panelGroup> <p:panel > 5)Output File(s): </p:panel> </h:panelGroup> <h:panelGroup> <u style="color:orange"> <i> <p:inplace emptyLabel="Add file name" id="aipexe"> <h:inputText title="Enter the output filenames" id="aexe" value="#{arjob.ofilename}"> <f:ajax event="valueChange"/> </h:inputText> <p:commandButton update="adoutlist :erdialog" value="add" actionListener="#{arjob.addOutfile}"/> </p:inplace> </i> </u> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <h:panelGrid id="afilelistwrapper"> 
<p:dataList id="adoutlist" type="ordered" var="ofile" value="#{arjob.foutlist}"> <p:column id="acol"> #{ofile}&nbsp; <p:commandLink ajax="true" update="afilelistwrapper" actionListener="#{arjob.removeOutfile(ofile)}"> <p:graphicImage value="images/closebar.png"/> </p:commandLink> </p:column> </p:dataList> </h:panelGrid> </h:panelGroup> <p:panel> <h:outputLabel value="6) Operating System"/> </p:panel> <h:panelGroup> <h:selectOneMenu title="Select an OperatingSystem" value="#{arjob.os}"> <f:selectItem itemLabel="CentOS Release 5.2" itemValue="Cent OS 5.2"/> <f:selectItem itemLabel="RHEL Server Release 5" itemValue="RHEL server 5"/> <f:selectItem itemLabel="RHEL Server Release 5.2" itemValue="RHEL server 5.2"/> </h:selectOneMenu> </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="7) Physical Memory:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.mem}"> </p:spinner>(MB) </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="8) Disk Space:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.diskspace}"> </p:spinner>(MB) </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="9) CPU Mhz:"/> </p:panel> </h:panelGroup> <h:panelGroup> <p:spinner min="0" style="width: 100px" stepFactor="10" value="#{arjob.cpumhz}"> </p:spinner>(Mhz) </h:panelGroup> <p:panel> <h:outputLabel value="10) Start Time:"/> </p:panel> <h:panelGroup> <p:inputMask title="(YYYY-MM-DD HH:MM:SS)" mask="9999-99-99 99:99:99" value="#{arjob.startt}"> </p:inputMask> </h:panelGroup> <p:panel> <h:outputLabel value="11) End Time:"/> </p:panel> <h:panelGroup> <p:inputMask title="(YYYY-MM-DD HH:MM:SS)" mask="9999-99-99 99:99:99" value="#{arjob.endt}"> <p:ajax event="valueChange"/> </p:inputMask> </h:panelGroup> <p:panel> <h:outputLabel value="12) LRMS type"/> </p:panel> <h:panelGroup> <h:selectOneMenu value="#{arjob.lrms}"> <f:selectItem itemLabel="PBS" itemValue="PBS"/> <f:selectItem itemLabel="SGE" itemValue="SGE"/> </h:selectOneMenu> </h:panelGroup> <h:panelGroup> <p:panel> <h:outputLabel value="13)Number of Nodes: *"/> </p:panel> </h:panelGroup> <h:panelGroup> <h:panelGroup> <p:spinner style="width: 100px" min="1" max="100" value="#{arjob.numnodes}"> </p:spinner> </h:panelGroup> </h:panelGroup> <h:panelGroup> </h:panelGroup> <h:panelGroup> <p:commandButton ajax="false" value="Submit" action="#{arjob.jobSubmitAction}"/> </h:panelGroup> </h:panelGrid> </h:form> <p:draggable for="jobres" handle=".ui-panel-titlebar"/> </p:panel> Exception: SEVERE: javax.faces.FacesException: Unexpected error restoring state for component with id j_idt7. Cause: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0. 
at com.sun.faces.application.view.StateManagementStrategyImpl$2.visit(StateManagementStrategyImpl.java:239) at com.sun.faces.component.visit.FullVisitContext.invokeVisitCallback(FullVisitContext.java:147) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1446) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at javax.faces.component.UIComponent.visitTree(UIComponent.java:1457) at com.sun.faces.application.view.StateManagementStrategyImpl.restoreView(StateManagementStrategyImpl.java:223) at com.sun.faces.application.StateManagerImpl.restoreView(StateManagerImpl.java:177) at com.sun.faces.application.view.ViewHandlingStrategy.restoreView(ViewHandlingStrategy.java:131) at com.sun.faces.application.view.FaceletViewHandlingStrategy.restoreView(FaceletViewHandlingStrategy.java:430) at com.sun.faces.application.view.MultiViewHandler.restoreView(MultiViewHandler.java:143) at javax.faces.application.ViewHandlerWrapper.restoreView(ViewHandlerWrapper.java:288) at com.sun.faces.lifecycle.RestoreViewPhase.execute(RestoreViewPhase.java:199) at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) at com.sun.faces.lifecycle.RestoreViewPhase.doPhase(RestoreViewPhase.java:110) at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1523) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:343) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.primefaces.webapp.filter.FileUploadFilter.doFilter(FileUploadFilter.java:79) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:277) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:188) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:641) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:97) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:85) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:185) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:332) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:233) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:165) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57) at 
com.sun.grizzly.ContextTask.run(ContextTask.java:69) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 at java.util.ArrayList.RangeCheck(ArrayList.java:547) at java.util.ArrayList.get(ArrayList.java:322) at javax.faces.component.AttachedObjectListHolder.restoreState(AttachedObjectListHolder.java:161) at javax.faces.component.UIComponentBase.restoreState(UIComponentBase.java:1427) at com.sun.faces.application.view.StateManagementStrategyImpl$2.visit(StateManagementStrategyImpl.java:231) ... 45 more

    Read the article

  • how to generate tinymce to ajax generated textarea

    - by Jai_pans
    Hi, i have a image multi-uloader script which also each item uploaded was preview 1st b4 it submitted and each images has its following textarea which are also generated by javascript and my problem is i want to use the tinymce editor to each textarea generated by the ajax. Any help will be appreciated.. here is my script function fileQueueError(file, errorCode, message) { try { var imageName = "error.gif"; var errorName = ""; if (errorCode === SWFUpload.errorCode_QUEUE_LIMIT_EXCEEDED) { errorName = "You have attempted to queue too many files."; } if (errorName !== "") { alert(errorName); return; } switch (errorCode) { case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: imageName = "zerobyte.gif"; break; case SWFUpload.QUEUE_ERROR.FILE_EXCEEDS_SIZE_LIMIT: imageName = "toobig.gif"; break; case SWFUpload.QUEUE_ERROR.ZERO_BYTE_FILE: case SWFUpload.QUEUE_ERROR.INVALID_FILETYPE: default: alert(message); break; } addImage("images/" + imageName); } catch (ex) { this.debug(ex); } } function fileDialogComplete(numFilesSelected, numFilesQueued) { try { if (numFilesQueued 0) { this.startUpload(); } } catch (ex) { this.debug(ex); } } function uploadProgress(file, bytesLoaded) { try { var percent = Math.ceil((bytesLoaded / file.size) * 100); var progress = new FileProgress(file, this.customSettings.upload_target); progress.setProgress(percent); if (percent === 100) { progress.setStatus("Creating thumbnail..."); progress.toggleCancel(false, this); } else { progress.setStatus("Uploading..."); progress.toggleCancel(true, this); } } catch (ex) { this.debug(ex); } } function uploadSuccess(file, serverData) { try { var progress = new FileProgress(file, this.customSettings.upload_target); if (serverData.substring(0, 7) === "FILEID:") { addRow("tableID","thumbnail.php?id=" + serverData.substring(7),file.name); //setup(); //generateTinyMCE('itemdescription[]'); progress.setStatus("Thumbnail Created."); progress.toggleCancel(false); } else { addImage("images/error.gif"); progress.setStatus("Error."); progress.toggleCancel(false); alert(serverData); } } catch (ex) { this.debug(ex); } } function uploadComplete(file) { try { /* I want the next upload to continue automatically so I'll call startUpload here */ if (this.getStats().files_queued 0) { this.startUpload(); } else { var progress = new FileProgress(file, this.customSettings.upload_target); progress.setComplete(); progress.setStatus("All images received."); progress.toggleCancel(false); } } catch (ex) { this.debug(ex); } } function uploadError(file, errorCode, message) { var imageName = "error.gif"; var progress; try { switch (errorCode) { case SWFUpload.UPLOAD_ERROR.FILE_CANCELLED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Cancelled"); progress.toggleCancel(false); } catch (ex1) { this.debug(ex1); } break; case SWFUpload.UPLOAD_ERROR.UPLOAD_STOPPED: try { progress = new FileProgress(file, this.customSettings.upload_target); progress.setCancelled(); progress.setStatus("Stopped"); progress.toggleCancel(true); } catch (ex2) { this.debug(ex2); } case SWFUpload.UPLOAD_ERROR.UPLOAD_LIMIT_EXCEEDED: imageName = "uploadlimit.gif"; break; default: alert(message); break; } addImage("images/" + imageName); } catch (ex3) { this.debug(ex3); } } function addRow(tableID,src,filename) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); rowCount + 1; row.id = "row"+rowCount; var cell0 = row.insertCell(0); cell0.innerHTML = rowCount; 
cell0.style.background = "#FFFFFF"; var cell1 = row.insertCell(1); cell1.align = "center"; cell1.style.background = "#FFFFFF"; var imahe = document.createElement("img"); imahe.setAttribute("src",src); var hidden = document.createElement("input"); hidden.setAttribute("type","hidden"); hidden.setAttribute("name","filename[]"); hidden.setAttribute("value",filename); /*var hidden2 = document.createElement("input"); hidden2.setAttribute("type","hidden"); hidden2.setAttribute("name","filename[]"); hidden2.setAttribute("value",filename); cell1.appendChild(hidden2);*/ cell1.appendChild(hidden); cell1.appendChild(imahe); var cell2 = row.insertCell(2); cell2.align = "left"; cell2.valign = "top"; cell2.style.background = "#FFFFFF"; //tr1.appendChild(td1); var div2 = document.createElement("div"); div2.style.padding ="0 0 0 10px"; div2.style.width = "400px"; var alink = document.createElement("a"); //alink.style.margin="40px 0 0 0"; alink.href ="#"; alink.innerHTML ="Cancel"; alink.onclick= function () { document.getElementById(row.id).style.display='none'; document.getElementById(textfield.id).disabled='disabled'; }; var div = document.createElement("div"); div.style.margin="10px 0"; div.appendChild(alink); var textfield = document.createElement("input"); textfield.id = "file"+rowCount; textfield.type = "text"; textfield.name = "itemname[]"; textfield.style.margin = "10px 0"; textfield.style.width = "400px"; textfield.value = "Item Name"; textfield.onclick= function(){ //textfield.value=""; if(textfield.value=="Item Name") textfield.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(price.value=="") price.value="Item Price"; } var desc = document.createElement("textarea"); desc.name = "itemdescription[]"; desc.cols = "80"; desc.rows = "4"; desc.innerHTML = "Item Description"; desc.onclick = function(){ if(desc.innerHTML== "Item Description") desc.innerHTML = ""; if(textfield.value=="Item name" || textfield.value=="") textfield.value="Item Name"; if(price.value=="") price.value="Item Price"; } var price = document.createElement("input"); price.id = "file"+rowCount; price.type = "text"; price.name = "itemprice[]"; price.style.margin = "10px 0"; price.style.width = "400px"; price.value = "Item Price"; price.onclick= function(){ if(price.value=="Item Price") price.value=""; if(desc.innerHTML=="") desc.innerHTML ="Item Description"; if(textfield.value=="") textfield.value="Item Name"; } var span = document.createElement("span"); span.innerHTML = "View"; span.style.width = "auto"; span.style.padding = "10px 0"; var view = document.createElement("input"); view.id = "file"+rowCount; view.type = "checkbox"; view.name = "publicview[]"; view.value = "y"; view.checked = "checked"; var div3 = document.createElement("div"); div3.appendChild(span); div3.appendChild(view); var div4 = document.createElement("div"); div4.style.padding = "10px 0"; var span2 = document.createElement("span"); span2.innerHTML = "Default Display"; span2.style.width = "auto"; span2.style.padding = "10px 0"; var radio = document.createElement("input"); radio.type = "radio"; radio.name = "setdefault"; radio.value = "y"; div4.appendChild(span2); div4.appendChild(radio); div2.appendChild(div); //div2.appendChild(label); //div2.appendChild(table); div2.appendChild(textfield); div2.appendChild(desc); div2.appendChild(price); div2.appendChild(div3); div2.appendChild(div4); cell2.appendChild(div2); } function addImage(src,val_id) { var newImg = document.createElement("img"); newImg.style.margin = "5px 50px 5px 5px"; 
newImg.style.display= "inline"; newImg.id=val_id; document.getElementById("thumbnails").appendChild(newImg); if (newImg.filters) { try { newImg.filters.item("DXImageTransform.Microsoft.Alpha").opacity = 0; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. newImg.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + 0 + ')'; } } else { newImg.style.opacity = 0; } newImg.onload = function () { fadeIn(newImg, 0); }; newImg.src = src; } function fadeIn(element, opacity) { var reduceOpacityBy = 5; var rate = 30; // 15 fps if (opacity < 100) { opacity += reduceOpacityBy; if (opacity > 100) { opacity = 100; } if (element.filters) { try { element.filters.item("DXImageTransform.Microsoft.Alpha").opacity = opacity; } catch (e) { // If it is not set initially, the browser will throw an error. This will set it if it is not set yet. element.style.filter = 'progid:DXImageTransform.Microsoft.Alpha(opacity=' + opacity + ')'; } } else { element.style.opacity = opacity / 100; } } if (opacity < 100) { setTimeout(function () { fadeIn(element, opacity); }, rate); } } /* ************************************** * FileProgress Object * Control object for displaying file info * ************************************** */ function FileProgress(file, targetID) { this.fileProgressID = "divFileProgress"; this.fileProgressWrapper = document.getElementById(this.fileProgressID); if (!this.fileProgressWrapper) { this.fileProgressWrapper = document.createElement("div"); this.fileProgressWrapper.className = "progressWrapper"; this.fileProgressWrapper.id = this.fileProgressID; this.fileProgressElement = document.createElement("div"); this.fileProgressElement.className = "progressContainer"; var progressCancel = document.createElement("a"); progressCancel.className = "progressCancel"; progressCancel.href = "#"; progressCancel.style.visibility = "hidden"; progressCancel.appendChild(document.createTextNode(" ")); var progressText = document.createElement("div"); progressText.className = "progressName"; progressText.appendChild(document.createTextNode(file.name)); var progressBar = document.createElement("div"); progressBar.className = "progressBarInProgress"; var progressStatus = document.createElement("div"); progressStatus.className = "progressBarStatus"; progressStatus.innerHTML = "&nbsp;"; this.fileProgressElement.appendChild(progressCancel); this.fileProgressElement.appendChild(progressText); this.fileProgressElement.appendChild(progressStatus); this.fileProgressElement.appendChild(progressBar); this.fileProgressWrapper.appendChild(this.fileProgressElement); document.getElementById(targetID).appendChild(this.fileProgressWrapper); fadeIn(this.fileProgressWrapper, 0); } else { this.fileProgressElement = this.fileProgressWrapper.firstChild; this.fileProgressElement.childNodes[1].firstChild.nodeValue = file.name; } this.height = this.fileProgressWrapper.offsetHeight; } FileProgress.prototype.setProgress = function (percentage) { this.fileProgressElement.className = "progressContainer green"; this.fileProgressElement.childNodes[3].className = "progressBarInProgress"; this.fileProgressElement.childNodes[3].style.width = percentage + "%"; }; FileProgress.prototype.setComplete = function () { this.fileProgressElement.className = "progressContainer blue"; this.fileProgressElement.childNodes[3].className = "progressBarComplete"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setError = function () { 
this.fileProgressElement.className = "progressContainer red"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setCancelled = function () { this.fileProgressElement.className = "progressContainer"; this.fileProgressElement.childNodes[3].className = "progressBarError"; this.fileProgressElement.childNodes[3].style.width = ""; }; FileProgress.prototype.setStatus = function (status) { this.fileProgressElement.childNodes[2].innerHTML = status; }; FileProgress.prototype.toggleCancel = function (show, swfuploadInstance) { this.fileProgressElement.childNodes[0].style.visibility = show ? "visible" : "hidden"; if (swfuploadInstance) { var fileID = this.fileProgressID; this.fileProgressElement.childNodes[0].onclick = function () { swfuploadInstance.cancelUpload(fileID); return false; }; } };
I am using SWFUpload, and I just added the input fields and a textarea to the preview of the images that are queued for upload. In my HTML I have this script:
var swfu; window.onload = function () { swfu = new SWFUpload({ // Backend Settings upload_url: "../we_modules/upload.php", // Relative to the SWF file or absolute post_params: {"PHPSESSID": ""}, // File Upload Settings file_size_limit : "20 MB", // 2MB file_types : "*.*", //file_types : "", file_types_description : "jpg", file_upload_limit : "0", file_queue_limit : "0", // Event Handler Settings - these functions as defined in Handlers.js // The handlers are not part of SWFUpload but are part of my website and control how // my website reacts to the SWFUpload events. //file_queued_handler : fileQueued, file_queue_error_handler : fileQueueError, file_dialog_complete_handler : fileDialogComplete, upload_progress_handler : uploadProgress, upload_error_handler : uploadError, upload_success_handler : uploadSuccess, upload_complete_handler : uploadComplete, // Button Settings button_image_url : "../we_modules/images/SmallSpyGlassWithTransperancy_17x18.png", // Relative to the SWF file button_placeholder_id : "spanButtonPlaceholder", button_width: 180, button_height: 18, button_text : 'Select Files(2 MB Max)', button_text_style : '.button { font-family: Helvetica, Arial, sans-serif; font-size: 12pt;cursor:pointer } .buttonSmall { font-size: 10pt; }', button_text_top_padding: 0, button_text_left_padding: 18, button_window_mode: SWFUpload.WINDOW_MODE.TRANSPARENT, button_cursor: SWFUpload.CURSOR.HAND, // Flash Settings flash_url : "../swfupload/swfupload.swf", custom_settings : { upload_target : "divFileProgressContainer" }, // Debug Settings debug: false }); };
Where should I put the TinyMCE call you mentioned below?
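One possible placement (a sketch, not from the original thread, assuming TinyMCE 3.x is already loaded on the page): give the dynamically created textarea a known id and attach an editor to it only after the row has been appended to the document, since TinyMCE can only bind to elements that are already in the DOM. The id naming scheme below is an assumption.

    // Inside the function that builds the preview row, after cell2.appendChild(div2):
    desc.id = "itemdescription" + rowCount;            // assumed naming scheme for the new textarea
    if (window.tinyMCE) {
        // TinyMCE 3.x API; with TinyMCE 4+ use tinymce.init({ selector: "#" + desc.id }) instead
        tinyMCE.execCommand("mceAddControl", false, desc.id);
    }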

    Read the article

  • How do I ensure that a JPanel Shrinks when the parent frame is resized?

    - by dah
    I have a basic notes panel that I'm looking to shrink the width of when the parent jframe is resized but it isn't happening. I'm using nested gridbaglayouts. package com.protocase.notes.views; import com.protocase.notes.controller.NotesController; import com.protocase.notes.model.Subject; import com.protocase.notes.model.Note; import com.protocase.notes.model.database.PMSNotesAdapter; import java.awt.Color; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import javax.swing.BorderFactory; import javax.swing.JButton; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JScrollPane; /** * @author DavidH */ public class NotesViewer extends JPanel { // <editor-fold defaultstate="collapsed" desc="Attributes"> private Subject subject; private NotesController controller; //</editor-fold> // <editor-fold defaultstate="collapsed" desc="Getters N' Setters"> /** * Gets back the current subject. * @return */ public Subject getSubject() { return subject; } public NotesController getController() { return controller; } public void setController(NotesController controller) { this.controller = controller; } /** * Should clear the panel of the current subject and load the details for * the other object. * @param subject */ public void setSubject(Subject subject) { this.subject = subject; } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="Constructor"> /** * -- Sets up a note viewer with a subject and a controller. Likely this * would be the constructor used if you were passing off from another * NoteViewer or something else that used a notes adapter or controller. * @param subject * @param controller */ public NotesViewer(Subject subject, NotesController controller) { this.subject = subject; this.controller = controller; initComponents(); } /** * -- Sets up a note view with a subject and creates a new controller. This * would be the constructor typically chosen if choosing notes was * infrequent and only one or two notes needs to be displayed. * @param subject */ public NotesViewer(Subject subject) { this(subject, new NotesController(new PMSNotesAdapter())); } /** * -- Sets up a note view without a subject and creates a new controller. * This would be for a note viewer without any notes, perhaps populating * as you choose values in another form. 
* @param subject */ public NotesViewer() { this(null); } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="initComponents()"> /** * Sets up the view for the NotesViewer */ private void initComponents() { // -- Make a new panel for the header JPanel panel = new JPanel(); panel.setLayout(new GridBagLayout()); GridBagConstraints c = new GridBagConstraints(); c.gridx = 0; c.fill = GridBagConstraints.HORIZONTAL; c.gridy = 0; c.weightx = .5; //c.anchor = GridBagConstraints.NORTHWEST; JLabel label = new JLabel("Viewing Notes for [Subject]"); label.setAlignmentX(JLabel.LEFT_ALIGNMENT); label.setBorder(BorderFactory.createLineBorder(Color.YELLOW)); panel.add(label); JButton newNoteButton = new JButton("New"); c = new GridBagConstraints(); // c.fill = GridBagConstraints.HORIZONTAL; c.gridx = 1; c.gridy = 0; c.weightx = .5; c.anchor = GridBagConstraints.EAST; panel.add(newNoteButton, c); // -- NotePanels c = new GridBagConstraints(); c.fill = GridBagConstraints.HORIZONTAL; c.weightx = 1; c.weighty = 1; c.gridx = 0; c.gridwidth = 2; int y = 1; for (Note n : subject.getNotes()) { c.gridy = y++; panel.add(new NotesPanel(n, controller), c); } this.setLayout(new GridBagLayout()); GridBagConstraints pc = new GridBagConstraints(); pc.gridx = 0; pc.gridy = 0; pc.weightx = 1; pc.weighty = 1; pc.fill = GridBagConstraints.BOTH; panel.setBackground(Color.blue); JScrollPane scroll = new JScrollPane(); scroll.setViewportView(panel); //scroll.setHorizontalScrollBarPolicy(JScrollPane.HORIZONTAL_SCROLLBAR_NEVER); this.add(scroll, pc); //this.add(panel, pc); // -- Add it all to the layout } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="private methods"> //</editor-fold> } package com.protocase.notes.views; import com.protocase.notes.controller.NotesController; import com.protocase.notes.model.Note; import java.awt.CardLayout; import java.awt.Color; import java.awt.Component; import java.awt.Dimension; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.BorderFactory; import javax.swing.JButton; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JScrollPane; import javax.swing.JTextArea; import javax.swing.JTextField; import javax.swing.border.BevelBorder; import javax.swing.border.Border; import javax.swing.border.MatteBorder; /** * @author dah01 */ public class NotesPanel extends JPanel { // <editor-fold defaultstate="collapsed" desc="Attributes"> private Note note; private NotesController controller; private CardLayout cardLayout; private JTextArea viewTextArea; private JTextArea editTextArea; //</editor-fold> // <editor-fold defaultstate="collapsed" desc="Getters N' Setters"> public NotesController getController() { return controller; } public void setController(NotesController controller) { this.controller = controller; } public Note getNote() { return note; } public void setNote(Note note) { this.note = note; } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="Constructor"> /** * Sets up a note panel that shows everything about the note. * @param note */ public NotesPanel(Note note, NotesController controller) { this.note = note; cardLayout = new CardLayout(); this.setLayout(cardLayout); // -- Setup the layout manager. 
this.setBackground(new Color(199, 187, 192)); this.setBorder(new BevelBorder(BevelBorder.RAISED)); // -- ViewPanel this.add("ViewPanel", initViewPanel()); this.add("EditPanel", initEditPanel()); } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="EditPanel"> private JPanel initEditPanel() { JPanel editPanel = new JPanel(); editPanel.setLayout(new GridBagLayout()); GridBagConstraints c = new GridBagConstraints(); c.fill = GridBagConstraints.HORIZONTAL; c.gridy = 0; c.weightx = 1; c.weighty = 0.3; editPanel.add(initCreatorLabel(), c); c.gridy++; editPanel.add(initEditTextScroll(), c); c.gridy++; c.anchor = GridBagConstraints.WEST; c.fill = GridBagConstraints.NONE; editPanel.add(initEditorLabel(), c); c.gridx++; c.anchor = GridBagConstraints.EAST; editPanel.add(initSaveButton(), c); return editPanel; } private JScrollPane initEditTextScroll() { this.editTextArea = new JTextArea(note.getContents()); editTextArea.setLineWrap(true); editTextArea.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(editTextArea); scrollPane.setAlignmentX(JScrollPane.LEFT_ALIGNMENT); Border b = scrollPane.getViewportBorder(); MatteBorder mb = BorderFactory.createMatteBorder(2, 2, 2, 2, Color.BLUE); scrollPane.setBorder(mb); return scrollPane; } private JButton initSaveButton() { final CardLayout l = this.cardLayout; final JPanel p = this; final NotesController c = this.controller; final Note n = this.note; final JTextArea noteText = this.viewTextArea; final JTextArea textToSubmit = this.editTextArea; ActionListener al = new ActionListener() { @Override public void actionPerformed(ActionEvent e) { //controller.saveNote(n); noteText.setText(textToSubmit.getText()); l.next(p); } }; JButton saveButton = new JButton("Save"); saveButton.addActionListener(al); saveButton.setPreferredSize(new Dimension(62, 26)); return saveButton; } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="ViewPanel"> private JPanel initViewPanel() { JPanel viewPanel = new JPanel(); viewPanel.setLayout(new GridBagLayout()); GridBagConstraints c = new GridBagConstraints(); c.fill = GridBagConstraints.HORIZONTAL ; c.gridy = 0; c.weightx = 1; c.weighty = 0.3; viewPanel.add(initCreatorLabel(), c); c.gridy++; viewPanel.add(this.initNoteTextArea(), c); c.fill = GridBagConstraints.NONE; c.anchor = GridBagConstraints.WEST; c.gridy++; viewPanel.add(initEditorLabel(), c); c.gridx++; c.anchor = GridBagConstraints.EAST; viewPanel.add(initEditButton(), c); return viewPanel; } private JLabel initCreatorLabel() { DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd"); if (note != null) { String noteBy = "Note by " + note.getCreator(); String noteCreated = formatter.format(note.getDateCreated()); JLabel creatorLabel = new JLabel(noteBy + " @ " + noteCreated); creatorLabel.setAlignmentX(JLabel.LEFT_ALIGNMENT); return creatorLabel; } else { System.out.println("NOTE IS NULL"); return null; } } private JScrollPane initNoteTextArea() { // -- Setup the notes area. this.viewTextArea = new JTextArea(note.getContents()); viewTextArea.setEditable(false); viewTextArea.setLineWrap(true); viewTextArea.setWrapStyleWord(true); JScrollPane scrollPane = new JScrollPane(viewTextArea); scrollPane.setAlignmentX(JScrollPane.LEFT_ALIGNMENT); return scrollPane; } private JLabel initEditorLabel() { // -- Setup the edited by label. 
JLabel editorLabel = new JLabel(" -- Last edited by " + note.getLastEdited() + " at " + note.getDateModified()); editorLabel.setAlignmentX(Component.LEFT_ALIGNMENT); return editorLabel; } private JButton initEditButton() { final CardLayout l = this.cardLayout; final JPanel p = this; ActionListener ar = new ActionListener() { @Override public void actionPerformed(ActionEvent e) { l.next(p); } }; JButton editButton = new JButton("Edit"); editButton.setPreferredSize(new Dimension(62,26)); editButton.addActionListener(ar); return editButton; } //</editor-fold> // <editor-fold defaultstate="collapsed" desc="Grow Width When Resized"> @Override public Dimension getPreferredSize() { int fw = this.getParent().getSize().width; int fh = super.getPreferredSize().height; return new Dimension(fw,fh); } //</editor-fold> }
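One common fix for this kind of layout (a sketch, not part of the original post): instead of deriving getPreferredSize() from the parent's current size, let the panel that sits inside the JScrollPane implement Scrollable and report that it tracks the viewport width. The scroll pane then stretches and shrinks the panel as the frame is resized, while vertical scrolling keeps working. The class name WidthTrackingPanel is hypothetical.

    import java.awt.Dimension;
    import java.awt.Rectangle;
    import javax.swing.JPanel;
    import javax.swing.Scrollable;

    // Use this as the container that holds the NotesPanel rows inside the JScrollPane.
    public class WidthTrackingPanel extends JPanel implements Scrollable {
        @Override
        public boolean getScrollableTracksViewportWidth() {
            return true;   // always match the viewport's width, so the panel shrinks on resize
        }
        @Override
        public boolean getScrollableTracksViewportHeight() {
            return false;  // let the panel grow vertically and scroll
        }
        @Override
        public Dimension getPreferredScrollableViewportSize() {
            return getPreferredSize();
        }
        @Override
        public int getScrollableUnitIncrement(Rectangle visibleRect, int orientation, int direction) {
            return 16;     // pixels per arrow-key / mouse-wheel step
        }
        @Override
        public int getScrollableBlockIncrement(Rectangle visibleRect, int orientation, int direction) {
            return visibleRect.height;
        }
    }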

    Read the article

  • SQL SERVER – Retrieve and Explore Database Backup without Restoring Database – Idera virtual databas

    - by pinaldave
    I recently downloaded Idera’s SQL virtual database and tested it. There are a few things about this tool which caught my attention.
    My Scenario
    It is quite common in real life that observing or retrieving older data is sometimes necessary; however, that data has usually changed as time passed by. The full database backup was 40 GB in size, and restoring it on our production server usually takes around 16 to 22 minutes, depending on the load the server is under at the time. This range in time varies from one server to another as per the configuration of the computer. Some other issues we used to have are the following: when we tried to restore the large 40-GB database, we needed at least that much free space on our production server. Once in a while, we even had to make changes in the restored database and use that changed, restored database for our purpose, making it even more time-consuming.
    My Solution
    I have heard a lot about Idera’s SQL virtual database tool. Well, right after we started to test this tool, we found out that it really delivers what it promises. Using this software was very easy, and we were able to restore our database from backup in less than 2 minutes, sparing us the usual 16–22 minutes. The whole task was finished in a total of 10 minutes. Another interesting observation is that there is no need for additional space when restoring the database: not even a single additional MB of drive space is required for a complete database restoration. We can use the database in the same way as our regular database, and there is no need for any additional configuration and setup. Let us look at the most relevant points of this product based on my initial experience:
    Quick restoration of the database backup.
    No additional space required for database restoration.
    virtual database has no physical .MDF or .LDF; the database which is restored is, in fact, the backup file converted into a virtual database.
    DDL and DML queries can be executed against this virtually restored database.
    A regular backup operation can be run against the virtual database, creating a physical .bak file that can be used in the future.
    There was no observed degradation in performance on the original database or on the restored virtual database.
    Additional T-SQL queries can be run against the virtual database.
    Well, this summarizes my quick review. And, as I was saying, I am very impressed with the product and I plan to explore it more. There are many features that I have noticed in this tool which I think can be very useful if properly understood. I took a few screenshots using my demo database afterwards. Let us see what other things this tool can do besides the mentioned activities. I am surprised by its performance, so I want to know how exactly this feature works, specifically why it does not create any additional files and yet still allows updates on the virtually restored database. I guess I will have to send an e-mail to the developers at Idera and try to figure this out from them. I think this tool is very useful, and it delivers a level of performance well beyond what I expected. Soon, I will write a review of additional uses of SQL virtual database. If you are using SQL virtual database in your production environment, I am eager to learn more about it and your experience while using it.
    The ‘Virtual’ Part of virtual database
    When I set out to test this software, I thought virtual database had something to do with Hyper-V or virtualization.
In fact, the virtual database is a kind of database which shows up in your SQL Server Management Studio without your actually restoring or even creating it. This tool creates a database in SSMS from the backup of that same database. The virtually restored copy, however, works virtually the same way as the original database.
Potential Usage of virtual database:
As soon as I described this tool to my teammate, his very first reaction was, “Hey, if we have this then there is no need for log shipping.” I find his comment very interesting, as log shipping is something where logs are moved to another server. With virtual database, there are no updates applied to the database from a log; I would rather compare it with Snapshot Replication. In fact, wherever a snapshot-replicated database can be used, a virtual database can be used and configured in a similar way. I totally believe that we can use it for reporting purposes. In fact, after this database was configured, I think the uses of this tool are unlimited. I will have to spend some more time studying it and will get back to you.
The original post includes screenshots (click the images there to see larger versions): virtual database console; hard drive space before virtual database setup; Attach Full Backup screen; backup on hard drive; Attach Full Backup screen with settings; virtual database setup – less than 60 sec; virtual database setup – online; hard drive space after virtual database setup; Point in Time Recovery option – timeline view; virtual database summary; no performance difference between regular DB and virtual DB.
Please note that all SQL Server MVPs get a free license of this software.
Reference: Pinal Dave (http://blog.SQLAuthority.com), Idera (virtual database)
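For context, here is a sketch (not from the original post) of the kind of T-SQL you could run once a backup has been mounted as a virtual database; the database and table names are hypothetical, and the point is simply that ordinary queries and a regular BACKUP statement work against it like any other database.

    -- Query the virtually mounted database like a normal one
    USE SalesDB_Virtual;
    GO
    SELECT TOP (10) *
    FROM dbo.Orders            -- hypothetical table
    WHERE OrderDate >= '2010-01-01';
    GO
    -- Take a regular physical backup from the virtual database
    BACKUP DATABASE SalesDB_Virtual
    TO DISK = N'D:\Backup\SalesDB_FromVirtual.bak';
    GO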

    Read the article

  • Cannot Install/Start MySQL Server

    - by Peezy Bro
    Okay, I decided to migrate from MySQL Server 5.5.37 to Percona Server 5.6. I ended up removing MySQL Server by the following: sudo apt-get --purge remove mysql-server mysql-server-5.5 mysql-server-core-5.5 mysql-client mysql-client-core-5.5 mysql-common sudo apt-get autoremove sudo apt-get autoclean rm -rf /var/lib/mysql rm -rf /etc/mysql Now here is my problem, when I try to install MySQL Server 5.6 it goes through its process and when it asks me for a password, it comes up with Cannot set MySQL "root" password. After it "installs" MySQL wont start up and I get permission denied?. Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 35 not upgraded. brandon@brandon-DB:~$ sudo apt-get install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5 Suggested packages: libmldbm-perl libnet-daemon-perl libplrpc-perl libsql-statement-perl tinyca mailx The following NEW packages will be installed: libdbd-mysql-perl libdbi-perl libmysqlclient18 libterm-readkey-perl mysql-client-5.5 mysql-client-core-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5 0 upgraded, 10 newly installed, 0 to remove and 35 not upgraded. Need to get 0 B/8,955 kB of archives. After this operation, 96.3 MB of additional disk space will be used. Do you want to continue? [Y/n] y Preconfiguring packages ... Selecting previously unselected package mysql-common. (Reading database ... 167760 files and directories currently installed.) Preparing to unpack .../mysql-common_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libmysqlclient18:amd64. Preparing to unpack .../libmysqlclient18_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package libdbi-perl. Preparing to unpack .../libdbi-perl_1.630-1_amd64.deb ... Unpacking libdbi-perl (1.630-1) ... Selecting previously unselected package libdbd-mysql-perl. Preparing to unpack .../libdbd-mysql-perl_4.025-1_amd64.deb ... Unpacking libdbd-mysql-perl (4.025-1) ... Selecting previously unselected package libterm-readkey-perl. Preparing to unpack .../libterm-readkey-perl_2.31-1_amd64.deb ... Unpacking libterm-readkey-perl (2.31-1) ... Selecting previously unselected package mysql-client-core-5.5. Preparing to unpack .../mysql-client-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-client-5.5. Preparing to unpack .../mysql-client-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-core-5.5. Preparing to unpack .../mysql-server-core-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... Unpacking mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up mysql-common (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server-5.5. (Reading database ... 168116 files and directories currently installed.) Preparing to unpack .../mysql-server-5.5_5.5.37-0ubuntu0.14.04.1_amd64.deb ... 
Unpacking mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... Selecting previously unselected package mysql-server. Preparing to unpack .../mysql-server_5.5.37-0ubuntu0.14.04.1_all.deb ... Unpacking mysql-server (5.5.37-0ubuntu0.14.04.1) ... Processing triggers for ureadahead (0.100.0-16) ... Processing triggers for man-db (2.6.7.1-1) ... Setting up libmysqlclient18:amd64 (5.5.37-0ubuntu0.14.04.1) ... Setting up libdbi-perl (1.630-1) ... Setting up libdbd-mysql-perl (4.025-1) ... Setting up libterm-readkey-perl (2.31-1) ... Setting up mysql-client-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-client-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-core-5.5 (5.5.37-0ubuntu0.14.04.1) ... Setting up mysql-server-5.5 (5.5.37-0ubuntu0.14.04.1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing package mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin (2.19-0ubuntu6) ... No apport report written because the error message indicates its a followup error from a previous failure. Processing triggers for ureadahead (0.100.0-16) ... Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I have all my database/tables dumped and on a seperate HDD. This is also a Dev Machine and not my main Production Machine. I also backed up the MySQL_Config and MySQL_Data.
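A sketch of one way to get back to a clean slate before retrying the install (paths are the Ubuntu 14.04 defaults; adjust to your setup, and note this wipes any remaining local MySQL config and data):

    dpkg -l | grep -i mysql                    # see which mysql packages are still installed or half-configured
    sudo apt-get purge mysql-server mysql-server-5.5 mysql-common
    sudo apt-get autoremove
    sudo rm -rf /etc/mysql /var/lib/mysql      # clear leftover config and data directories
    sudo apt-get update
    sudo apt-get install mysql-server-5.5
    # If "start: Job failed to start" comes back, the real reason is usually in one of these logs:
    sudo tail -n 50 /var/log/mysql/error.log
    sudo tail -n 50 /var/log/upstart/mysql.log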

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This makes it possible to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The open-source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide a means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, the CPU cycles, and the disk and network throughput (in MB/s or IOPS) available to an application. Name Spaces help to isolate process groups from each other, e.g. by restricting the visibility of other running processes or granting exclusive access to a network device. It's also possible to restrict a process group's access to, and visibility of, the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux Containers are built, but they can actually be used independently as well. How Linux Containers can be created and managed on Oracle Linux will be described in more detail in the second part of this article; a short taste is sketched below. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy; Linux Containers on Wikipedia. - Lenz Grimmer
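A minimal sketch (not from the article; part two covers this properly) of what working with a container looks like using the LXC userspace tools, assuming they are installed and an "oracle" template is available on the host:

    lxc-create -n ol6test -t oracle        # create a container from the Oracle Linux template
    lxc-start  -n ol6test -d               # boot it in the background
    lxc-ls                                 # list containers known on this host
    # Use Control Groups to cap the container's memory at 512 MB
    lxc-cgroup -n ol6test memory.limit_in_bytes 512M
    lxc-stop   -n ol6test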

    Read the article

  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
    Have you ever recorded home video on a camcorder only to find the video size is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we’ll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK). AutoGK is a free application that runs on Windows. It supports Mpeg1, Mpeg2, Transport Streams, Vobs, and virtually any codec used for an .AVI file. AutoGK will accept as input the following file types: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs. Installing and Using AutoGK Download and install AutoGK (link below) Open the AutoGK. You’ll need to navigate a few wizard screens, but you can just accept the defaults.   Choose your video file by clicking on the folder to the right of the Input file text box.   Browse for and select your video file and click “Open.”   For this example, we’ll be working with an .AVI file that’s 167MB in size.   The output file is copied into the same directory as the input file by default, but you can change this if you choose. If the input file is also .AVI, AutoGK will append an _agk to the output file so that the original is not overwritten. Next, you’ll see any audio tracks listed. You can unselect the check box if you’d like to remove the audio track. You can choose one of the Predefined size options… Or, select a Custom size in MB or Target Quality in percentage. For our example, we’ll be compressing our 167MB file to 35MB. Click on Advanced Settings. Here you can choose your codec, if you have a preference, as well as output resolution and output audio. If you’d like to use the DivX codec, you’ll need to download and install it separately. (See link below) Typically you’ll want to keep the defaults. Click “OK.” Now you’re ready to add your file conversion job to the Job queue. Click Add Job to add it to the queue. You can add multiple files conversions to the job queue and  convert them in one batch. Click Start to begin the conversion process. The process will begin. You’ll be able to see the progress in the Log window on the bottom left. When the conversion is complete you’ll see a “Job finished” and the total time in the log window.   Check your output file to see it’s compressed size. Test your video just to make sure the output quality is satisfactory.   Note:  Conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GBs in size may take several hours to compress. AutoGK is no longer being actively developed but is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy protected DVDs. 
Downloads: AutoGordianKnot, DivX (optional).

    Read the article

  • Create a Persistent Bootable Ubuntu USB Flash Drive

    - by Trevor Bekolay
    Don’t feel like reinstalling an antivirus program every time you boot up your Ubuntu flash drive? We’ll show you how to create a bootable Ubuntu flash drive that will remember your settings, installed programs, and more! Previously, we showed you how to create a bootable Ubuntu flash drive that would reset to its initial state every time you booted it up. This is great if you’re worried about messing something up, and want to start fresh every time you start tinkering with Ubuntu. However, if you’re using the Ubuntu flash drive to diagnose and solve problems with your PC, you might find that a lot of problems require guess-and-test cycles. It would be great if the settings you change in Ubuntu and the programs you install stay installed the next time you boot it up. Fortunately, Universal USB Installer, a great little program from Pen Drive Linux, can do just that! Note: You will need a USB drive at least 2 GB large. Make sure you back up any files on the flash drive because this process will format the drive, removing any files currently on it. Once Ubuntu has been installed on the flash drive, you can move those files back if there is enough space. Put Ubuntu on your flash drive Universal-USB-Installer.exe does not need to be installed, so just double click on it to run it wherever you downloaded it. Click Yes if you get a UAC prompt, and you will be greeted with this window. Click I Agree. In the drop-down box on the next screen, select Ubuntu 9.10 Desktop i386. Don’t worry if you normally use 64-bit operating systems – the 32-bit version of Ubuntu 9.10 will still work fine. Some useful tools do not have 64-bit versions, so unless you’re planning on switching to Ubuntu permanently, the 32-bit version will work best. If you don’t have a copy of the Ubuntu 9.10 CD downloaded, then click on the checkbox to Download the ISO. You’ll be prompted to launch a web browser; click Yes. The download should start immediately. When it’s finished, return the the Universal USB Installer and click on Browse to navigate to the ISO file you just downloaded. Click OK and the text field will be populated with the path to the ISO file. Select the drive letter that corresponds to the flash drive that you would like to use from the dropdown box. If you’ve backed up the files on this drive, we recommend checking the box to format the drive. Finally, you have to choose how much space you would like to set aside for the settings and programs that will be stored on the flash drive. Considering that Ubuntu itself only takes up around 700 MB, 1 GB should be plenty, but we’re choosing 2 GB in this example because we have lots of space on this USB drive. Click on the Create button and then make yourself a sandwich – it will take some time to install no matter how fast your PC is. Eventually it will finish. Click Close. Now you have a flash drive that will boot into a fully capable Ubuntu installation, and any changes you make will persist the next time you boot it up! Boot into Ubuntu If you’re not sure how to set your computer to boot using the USB drive, then check out the How to Boot Into Ubuntu section of our previous article on creating bootable USB drives, or refer to your motherboard’s manual. Once your computer is set to boot using the USB drive, you’ll be greeted with splash screen with some options. Press Enter to boot into Ubuntu. The first time you do this, it may take some time to boot up. Fortunately, we’ve found that the process speeds up on subsequent boots. 
You’ll be greeted with the Ubuntu desktop. Now, if you change settings like the desktop resolution, or install a program, those changes will be permanently stored on the USB drive! We installed avast! Antivirus, and on the next boot, found that it was still in the Accessories menu where we left it. Conclusion We think that a bootable Ubuntu USB flash drive is a great tool to have around in case your PC has problems booting otherwise. By having the changes you make persist, you can customize your Ubuntu installation to be the ultimate computer repair toolkit! Download Universal USB Installer from Pen Drive Linux

    Read the article

  • Azure &ndash; Part 6 &ndash; Blob Storage Service

    - by Shaun
    When migrate your application onto the Azure one of the biggest concern would be the external files. In the original way we understood and ensure which machine and folder our application (website or web service) is located in. So that we can use the MapPath or some other methods to read and write the external files for example the images, text files or the xml files, etc. But things have been changed when we deploy them on Azure. Azure is not a server, or a single machine, it’s a set of virtual server machine running under the Azure OS. And even worse, your application might be moved between thses machines. So it’s impossible to read or write the external files on Azure. In order to resolve this issue the Windows Azure provides another storage serviec – Blob, for us. Different to the table service, the blob serivce is to be used to store text and binary data rather than the structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID and each block can be a maximum of 4 MB in size. Page Blobs are are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages. The maximum size for a page blob is 1 TB.   In the managed library the Azure SDK allows us to communicate with the blobs through these classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and the CloudPageBlob. Similar with the table service managed library, the CloudBlobClient allows us to reach the blob service by passing our storage account information and also responsible for creating the blob container is not exist. Then from the CloudBlobContainer we can save or load the block blobs and page blobs into the CloudBlockBlob and the CloudPageBlob classes.   Let’s improve our exmaple in the previous posts – add a service method allows the user to upload the logo image. In the server side I created a method name UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also add the validation to ensure that the email passed in is valid. 1: var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString"); 2: var accountContext = new DynamicDataContext<Account>(storageAccount); 3:  4: // validation 5: var accountNumber = accountContext.Load() 6: .Where(a => a.Email == email) 7: .ToList() 8: .Count; 9: if (accountNumber <= 0) 10: { 11: throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email)); 12: } Then there are three steps for saving the image into the blob service. First alike the table service I created the container with a unique name and create it if it’s not exist. 1: // create the blob container for account logos if not exist 2: CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient(); 3: CloudBlobContainer container = blobStorage.GetContainerReference("account-logo"); 4: container.CreateIfNotExist(); Then, since in this example I will just send the blob access URL back to the client so I need to open the read permission on that container. 
1: // configure blob container for public access 2: BlobContainerPermissions permissions = container.GetPermissions(); 3: permissions.PublicAccess = BlobContainerPublicAccessType.Container; 4: container.SetPermissions(permissions);
And at the end I build the blob resource name from the input file name and a Guid, and then save it to the block blob by using the UploadByteArray method. Finally, I return the URL of this blob back to the client side.
1: // save the blob into the blob service 2: string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString()); 3: CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName); 4: blob.UploadByteArray(image); 5:  6: return blob.Uri.ToString();
Let’s update the client-side application a bit and see the result. Here I just use my simple console application to let the user enter the email and the file name of the image. If it’s OK, it will show the URL of the blob on the server side so that we can view it through the web browser. Then we can see the logo I’ve just uploaded through that URL. You may notice that the blob URL is based on the container name and the blob's unique name. In the Azure SDK documentation there’s a page on the rules for naming them, but I think the simple rule is: they must be valid as a URL address. That means you cannot name the container with a dot or slash, as it will break the ADO.Data Service routing rule. For example, if you name the blob container Account.Logo, it will throw an exception saying 400 Bad Request.
Summary
In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system, we have to migrate our code for reading/writing files to the blob service before deploying it to Azure. To reduce this effort, Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It’s built on top of the blob service but is better suited for file access. I will discuss more about it in the next post.
Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
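For completeness, here is a sketch (not from the original post, but under the same SDK 1.x StorageClient assumptions as the code above) of the read path — fetching the uploaded logo back out of the "account-logo" container by its blob name:

    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    // uniqueBlobName is the name built in UploadLogo, e.g. "user@example.com_<guid>.jpg"
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    byte[] image = blob.DownloadByteArray();   // or blob.DownloadToStream(...) for larger files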

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps on coming which I wish were never asked. The question around, “Why is SQL Server using up all the memory and not releasing even when idle?” Well, the answer can be long and with the release of SQL Server 2014, this got even more complicated. This release of SQL Server 2014 has the option of introducing In-Memory OLTP which is completely new concept and our dependency on memory has increased multifold. In reality, nothing much changes but we have memory optimized objects (Tables and Stored Procedures) additional which are residing completely in memory and improving performance. As a DBA, it is humanly impossible to get a hang of all the innovations and the new features introduced in the next version. So today’s blog is around the report added to SSMS which gives a high level view of this new feature addition. This reports is available only from SQL Server 2014 onwards because the feature was introduced in SQL Server 2014. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on the database which is not having In-Memory File group defined, then we would see the message in report. To demonstrate, I have created new fresh database called MemoryOptimizedDB with no special file group. Here is the query used to identify whether a database has memory-optimized file group or not. SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add filegroup using below command, we would see different version of report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any Memory Optimized table in the database.  Total allocated size is shown as 0 MB. Now, let’s add the folder location into the filegroup and also created few in-memory tables. We have used the nomenclature of IMO to denote “InMemory Optimized” objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500)  COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as above script is executed, table and index both are created. If we run the report again, we would see something like below. Notice that table memory is zero but index is using memory. This is due to the fact that hash index needs memory to manage the buckets created. So even if table is empty, index would consume memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use below script to populate the table with 10000 rows INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 1000 rows into our InMemory table.    There are total three sections in the whole report. 
Total memory consumed by In-Memory objects.
Pie chart showing the memory distribution by type of consumer – table, index and system.
Details of memory usage by each table.
The information for all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report helps the DBA identify which in-memory objects are taking a lot of memory, which can be used as a pointer when designing a solution. I am sure that in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com)
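A slightly fuller query over the same DMV (a sketch; the KB column names below are assumed from the SQL Server 2014 documentation of sys.dm_db_xtp_table_memory_stats) that rolls the counters up to MB per user table:

    USE MemoryOptimizedDB;
    GO
    SELECT OBJECT_NAME(t.object_id) AS table_name,
           (t.memory_allocated_for_table_kb + t.memory_allocated_for_indexes_kb) / 1024.0 AS allocated_mb,
           (t.memory_used_by_table_kb + t.memory_used_by_indexes_kb) / 1024.0 AS used_mb
    FROM sys.dm_db_xtp_table_memory_stats AS t
    WHERE t.object_id > 0              -- user tables only; system tables have negative object IDs
    ORDER BY allocated_mb DESC;
    GO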

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >