Search Results

Search found 24675 results on 987 pages for 'table'.


  • Debian/OVH: How to configure multiple Failover IP on the same Xen (Debian) Virtual Machine?

    - by D.S.
    I have a problem on a Xen virtual machine (running latest Debian), when I try to configure a second failover IP address. OVH reports that my IP is misconfigured and they complain that they receive a massive quantity of ARP packets from these IPs, so they are going to block my IP unless I fix this issue. I suspect there's a routing issue, but I don't know (and can't find any useful info on the provider's website, and their support doesn't provide me a valid solution, just bounces me to their online - useless - guides). My /etc/network/interfaces looks like this: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address AAA.AAA.AAA.AAA netmask 255.255.255.255 broadcast AAA.AAA.AAA.AAA post-up route add 000.000.000.254 dev eth0 post-up route add default default gw 000.000.000.254 dev eth0 # Secondary NIC auto eth0:0 iface eth0:0 inet static address BBB.BBB.BBB.BBB netmask 255.255.255.255 broadcast BBB.BBB.BBB.BBB And the routing table is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 000.000.000.254 0.0.0.0 255.255.255.255 UH 0 0 0 eth0 0.0.0.0 000.000.000.254 0.0.0.0 UG 0 0 0 eth0 In these examples (true IP addresses are replaced by fake ones, guess why :)), 000.000.000.000 is my main server's IP address (dom0), 000.000.000.254 is the default gateway OVH recommends, AAA.AAA.AAA.AAA is the first IP Failover and BBB.BBB.BBB.BBB is the second one. I need both AAA.AAA.AAA.AAA and BBB.BBB.BBB.BBB to be publicly reachable from the Internet and point to my domU, and to be able to access the Internet from inside the virtual machine (domU). I am using eth0 and eth0:0 because, according to OVH support, I have to assign both IPs to the same MAC address and then create a virtual eth0:0 interface for the second IP. Any suggestions? What am I doing wrong? How can I stop OVH complaining about ARP flood? Many thanks in advance, DS
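
    A common layout for OVH failover IPs on a Xen domU is a single default route through the dom0 gateway, with the alias interface carrying only its address and no second gateway. The following is only a sketch built from the placeholders above (AAA.AAA.AAA.AAA, BBB.BBB.BBB.BBB, 000.000.000.254), not a verified OVH-specific configuration:

        # /etc/network/interfaces (sketch)
        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default gw 000.000.000.254

        auto eth0:0
        iface eth0:0 inet static
            address BBB.BBB.BBB.BBB
            netmask 255.255.255.255
            broadcast BBB.BBB.BBB.BBB

    One common cause of the kind of ARP noise OVH complains about is a default route bound to the interface without a gateway (route add default dev eth0), which makes the kernel ARP for every remote destination; with /32 netmasks and a gateway route as above, the guest should only ever ARP for 000.000.000.254 itself.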

    Read the article

  • Updated XAMPP with MySQL, all my tables are missing

    - by user371699
    I just updated XAMPP to a newer version, which included updating MySQL from 5.5 to 5.6. Using phpMyAdmin, however, all of my tables within my databases still appear on the left navigation panel, but the main window shows that all my databases are empty (except for information_schema and a couple of other default tables). Clicking on a table in the navigation panel gives me a "table doesn't exist" message. It does look like information_schema.tables doesn't have my tables, either. Can anyone assist me with this? I did make a complete backup of all my databases before the upgrade, but I first want to see if I can fix this the "normal" way. Furthermore, I'm not sure if the MySQL upgrade involved making changes to the information/performance databases, so I don't know if I can restore the old ones. Thank you. EDIT: Continuing my search, I realized that only the InnoDB tables are missing. I've tried running the following to no avail: /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp and /opt/lampp/bin $ sudo ./mysql_install_db --basedir=/opt/lampp --datadir=/opt/lampp/var/mysql The my.cnf file in /opt/lampp/etc contains the following InnoDB settings: innodb_data_home_dir = /opt/lampp/var/mysql/ innodb_data_file_path = ibdata1:10M:autoextend innodb_log_group_home_dir = /opt/lampp/var/mysql/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high innodb_buffer_pool_size = 16M # Deprecated in 5.6 #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size innodb_log_file_size = 5M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 What could possibly be wrong? Why is the information_schema not updating correctly? It looks like /opt/lampp/var/mysql has all my tables in it within the database directories, but they're still not showing up in information_schema.
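
    One frequent cause of exactly this picture (the .frm/.ibd files are still on disk, but InnoDB and information_schema no longer know the tables) is that the upgrade shipped a fresh ibdata1 system tablespace, which holds InnoDB's data dictionary. If a copy of the pre-upgrade data directory still exists, restoring the old ibdata1 and ib_logfile* files alongside the untouched database directories usually brings the tables back; the path of the old copy below is an assumption:

        # stop MySQL first, then restore the old InnoDB system files (sketch)
        sudo /opt/lampp/lampp stopmysql
        sudo cp -a /opt/lampp/var/mysql_old/ibdata1      /opt/lampp/var/mysql/
        sudo cp -a /opt/lampp/var/mysql_old/ib_logfile*  /opt/lampp/var/mysql/
        sudo /opt/lampp/lampp startmysql

    If no old ibdata1 survives, restoring from the pre-upgrade backup is the safer route.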

    Read the article

  • Routing data through VPN in linux

    - by Shadyabhi
    I think its a silly question but still here it goes.. Terminal Output: eth0 Link encap:Ethernet HWaddr 00:1c:c0:37:5e:25 inet addr:10.100.98.51 Bcast:10.100.98.255 Mask:255.255.255.0 inet6 addr: fe80::21c:c0ff:fe37:5e25/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:29677 errors:0 dropped:0 overruns:0 frame:0 TX packets:5209 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3179007 (3.1 MB) TX bytes:610142 (610.1 KB) Memory:e0380000-e03a0000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:76 errors:0 dropped:0 overruns:0 frame:0 TX packets:76 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:9555 (9.5 KB) TX bytes:9555 (9.5 KB) vpn_0 Link encap:Ethernet HWaddr 00:ac:39:95:a1:16 inet6 addr: fe80::2ac:39ff:fe95:a116/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1786 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:128597 (128.5 KB) TX bytes:468 (468.0 B) Actually, I followed this tutorial to setup the PacketiX VPN on ubuntu. Now, how do I actually use this VPN? Terminal Output: shadyabhi@shadyabhi-desktop:~$ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.100.98.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default 10.100.98.4 0.0.0.0 UG 0 0 0 eth0 shadyabhi@shadyabhi-desktop:~$ As told in tutorial, if I do route del default route add default dev vpn_0 I am not able to surf the internet. And I get the route command output as: root@shadyabhi-desktop:/home/shadyabhi# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.100.98.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default * 0.0.0.0 U 0 0 0 vpn_0 root@shadyabhi-desktop:/home/shadyabhi# I know I am not able to route the traffic properly. How do i do that?
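
    Note that in the ifconfig output above, vpn_0 has no IPv4 address at all, so pointing the default route at it cannot work by itself. A typical PacketiX/SoftEther client setup first obtains an address on the tunnel interface (usually via DHCP from the VPN server) and keeps a host route to the VPN server through the physical NIC before swapping the default route. A sketch, where 203.0.113.10 stands in for the VPN server's public address (an assumption):

        sudo dhclient vpn_0                                          # get an IPv4 address on the tunnel interface
        sudo route add -host 203.0.113.10 gw 10.100.98.4 dev eth0    # keep the tunnel endpoint reachable via eth0
        sudo route del default
        sudo route add default dev vpn_0

    DNS may also need to point at a server reachable through the tunnel once the default route moves.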

    Read the article

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian squeeze) with several NICs. The ones of interest are: eth3 - my main link to the network (dhcp on 10.20.30.0/24) eth0 - the first connection to my test network (static: 192.168.1.2) eth4 - the second connection to my test network (static: 192.168.1.1) My routing table looks like this: $ sudo route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.20.30.0 * 255.255.255.0 U 0 0 0 eth3 default 10.20.30.254 0.0.0.0 UG 0 0 0 eth3 I have the 2 test net ports connected to each other with a crossover cable and an instance of wireshark running on each port. Every once in a while, I'll see a packet like the following show up. Who could be doing this, and how do I convince them to stop? I do have Samba running on the machine (for a cifs mount) but don't see why it would be sending packets out to unrouted ports. I had a Windows VM running in VMWare Client and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces so I can run some tests with Scapy over them.
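
    Those MAILSLOT\BROWSE datagrams are most likely nmbd's periodic browser announcements, which Samba sends on every interface by default. A hedged sketch of the smb.conf change that pins Samba to the routed NIC only (interface names taken from the question):

        # /etc/samba/smb.conf
        [global]
            interfaces = lo eth3
            bind interfaces only = yes

    followed by /etc/init.d/samba restart; after that, the test interfaces eth0 and eth4 should stay silent as far as Samba is concerned.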

    Read the article

  • MySQL NDB cluster - node restart

    - by Arafat
    Hi guys! I just set up a MySQL cluster on a fairly decent baby (IBM x3650 M3) with 24GB memory, a 6-core Xeon and SAS 6Gbps HDDs, running Debian Lenny 5, 64-bit. The NDB version is 7.1.9a. Our database size on MyISAM is around 3.2 GB; the ndb_size estimation is 58GB for the NDB engine. A little info about my database is as follows: 150 common tables for global purposes, and 130 tables for each client. So it goes like this: 130 x 115 (clients) = 14950 tables. Is it normal or usual to have 14000 tables in one database? The reasons we did this were easy maintenance and per-client customization. Now, the problem is, NDB cluster can only support 20320 tables. But it can support 5,000,000,000 rows in one table if I'm not wrong. My real headache is that my cluster data node takes less than two minutes to start up without any data, but as soon as I convert my tables into NDB - and that's only 2000 tables so far - the data node takes at least 30 to 40 minutes to start up. Is that normal? If I convert all my tables into NDB, will it take even longer? Or, say I consolidate my 14000 tables' data down to 130 tables, will that help? Or is there anything idiotically wrong in what I'm doing? I'll attach my config.ini file soon; here's the simple overview of my config: Datamemory = 14G Indexmemory = 3GB Maxnooftable = 14000 Maxnoofattributes = 78000 I'm just testing these values with 2000 tables first. Please advise how to increase the start-up speed, and please point out where I'm going wrong. Thanks in advance guys!
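
    For reference, the values quoted in the overview map onto config.ini parameters roughly as below; this is only a sketch of the relevant [ndbd default] lines (NoOfReplicas = 2 is an assumption), not a tuned configuration, and start-up time also depends on redo log and checkpoint settings not shown here:

        [ndbd default]
        NoOfReplicas      = 2
        DataMemory        = 14G
        IndexMemory       = 3G
        MaxNoOfTables     = 14000      # the 7.1 series tops out at 20320 tables
        MaxNoOfAttributes = 78000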

    Read the article

  • Sorting/grouping when there are multiple values in one cell

    - by ngm
    I have an Excel 2007 spreadsheet, where each row of the dataset describes a feature of a piece of software. One of the columns in the spreadsheet is Relevant Users, which describes which users of the software the feature is of interest to. There may be a couple of different users interested in a feature, in which case I've been filling in the cell with the two user types separated by a semicolon, e.g. 'Usertype A; Usertype D'. Occasionally, I'd like to sort my data by the Relevant Users column. However, the way I'm populating the column means the sorting isn't very smart. If I have a feature where 'Relevant Users' is 'Usertype A; Usertype D', and then I sort by Relevant Users, that feature will be grouped at the end of all the other features relevant to Usertype A, as it's just sorting alphabetically. But I want it to be listed in the two separate groups of Usertype A and Usertype D. Or, if I have a pivot table that groups the features together under the heading of Relevant User, I'll get all the features for 'Usertype A', then 'Usertype B', then 'Usertype C', then 'Usertype D', then 'Usertype A; Usertype D', etc. Whereas I really want a feature with Relevant Users as 'Usertype A; Usertype D' to show up in both the Usertype A group and the Usertype D group. I guess if this information was in a database I might have a many-to-many table linking Relevant Users to features. But is there a way to go about having this kind of many-to-many relationship in Excel?

    Read the article

  • Need help with MS Access 07 & Reports

    - by Moe
    Hey there, I'm finding it difficult to get MS Access reporting to show what I'd like. What I'm trying to do is: a) In my database I store a URL (an external HTTP file) that points to a .jpeg. I'd like to use that URL to display the image on the report sheet. I have tried to use 'Control source' on the data panel, but with no success. Is there any way I can get dynamic images to show up for each record? Also, I have a couple of related tables. One defines values, for example: DefinePets('petID','Name of Pet'). The other one links the main DB with the 'DefinePets' table, e.g.: connect('petID','mainID','extraFeild'). I'd like my report to go into the "connect" table, where the currently viewed record value = mainID, then find petID and return Name of Pet. There is a many-to-many link between DefinePets and the main table (therefore connect is joining them up). Or is that too much to ask from a simple package like Access? Thanks.

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a raid5 array and need to stop it, but while trying to stop it getting error. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> # mdadm --stop mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: No devices given. # mdadm --stop /dev/md0 mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: fail to stop array /dev/md0: Device or resource busy and # lsof | grep md0 md0_raid5 965 root cwd DIR 8,1 4096 2 / md0_raid5 965 root rtd DIR 8,1 4096 2 / md0_raid5 965 root txt unknown /proc/965/exe # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] # grep md0 /proc/mdstat md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] # grep md0 /proc/partitions 9 0 2120320 md0 While booting, md1 is mounted ok but md0 failed for some unknown reason # dmesg | grep md[0-9] [ 4.399658] raid5: allocated 3179kB for md1 [ 4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2 [ 4.400678] md1: detected capacity change from 0 to 2121793536 [ 4.403135] md1: unknown partition table [ 38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed [ 38.941969] XFS mounting filesystem md1 [ 41.058808] Ending clean XFS mount for filesystem: md1 [ 46.325684] raid5: allocated 3179kB for md0 [ 46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2 [ 46.330620] md0: detected capacity change from 0 to 2171207680 [ 46.335598] md0: unknown partition table [ 46.410195] md: recovery of RAID array md0 [ 117.970104] md: md0: recovery done. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[0] sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU] md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1] 2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
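
    Separately from the stop problem, the "metadata format 00.90 unknown, ignored" warnings usually come from an old "metadata=00.90" line in /etc/mdadm/mdadm.conf; newer mdadm expects it written as "0.90". For the stop itself, "Device or resource busy" normally means something still holds md0 (a mount, LVM, or an in-flight resync), so a hedged sequence looks like:

        umount /dev/md0          # ignore the error if it is not mounted
        vgchange -a n            # only needed if LVM volume groups sit on top of md0 (assumption)
        mdadm --stop /dev/md0

    Checking /proc/mounts, dmsetup table and swapon -s for users of md0 before stopping it narrows down what is holding the array open.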

    Read the article

  • Excel 2007: Named ranges problems when linking workbooks

    - by Mike
    I've 30+ workbooks, each with 5 specific worksheets (formatted the same). Each worksheet's data needs to be linked to a master workbook, so that I end up with 5 master workbooks and all the specific data in one long table format $A$2:$I$750. (Are you still with me? ;)) I don't have access to a database, so I'm having to link the sheets to their master workbook directly. I've highlighted the data I need, named the range, and then tried referencing this from my master workbook. I get the #Value error symbol when I try to link (=[WorkbookName]!MyNamedRange) to a cell that doesn't match the top left cell of my range. Example: MyNamedRange is always =$A$2:$I$43 on one specific sheet. On my master workbook it works if it's referenced at A2, but I get #Value if it's referenced at A1 or A44. Any ideas? I'm trying to link my data in one continuous table so I can run a pivot on it, and other things. Can it be done like this, or should I just copy and paste? I'm trying to keep things 'linked' so I do not need to spend time C&Ping all day. Many thanks, Mike.

    Read the article

  • Kindle (client) for Mac

    - by doug
    So we're clear, I'm talking about the client/software version here--i.e., the one you install on your Mac or PC--not the device. The Kindle client was recently released for the Mac. I bought a couple of Kindle-edition books and I'm reading them using this client. Astonishingly, two features I consider to be more or less essential to any ebook reader are missing in the Kindle client--either that, or I can't find them: (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window--the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location." "Location" requires that you type in an integer (but it doesn't correspond to a page number--e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a "Show Notes and Marks" button in the upper right-hand corner of the window, yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • Linux Uninstalling errors

    - by Zack
    I want to uninstall BackTrack 5, so I deleted the partitions for the BackTrack OS. After deleting it, the partition that used to be for BackTrack becomes free space, as in the picture. But I can't delete that partition, nor create a new partition. I used GParted from the Hiren's Boot CD, but it says there is no partition table and that I need to create one. But actually I have 5 partitions already. I thought restarting might fix it, but after the POST screen my laptop shows a grub error. I don't know what to do, and I tried to install BackTrack again to fix the problem, but it also says that I do not have any partitions. I can only boot Windows by going through the Hiren's Boot CD. But most of the time my computer does not recognize the external DVD drive, nor the internal one, so I have to restart again and again, hoping to catch the time the computer recognizes the DVD drive. Can I change the boot loader to correct the grub error? SOLVED: I have solved the grub error by writing the MBR again using EasyBCD, but I still have the format error.

    Read the article

  • Change font size for "Default Paragraph Font" in Word

    - by Richard Gadsden
    I have a document where the built-in style "Default Paragraph Font" has been set to a particular size. It shouldn't have a size - it should be inheriting from the paragraph style (that's the whole point of the style). If I go through the user interface, I can't modify this style (the modify button / dropdown is greyed out) While I can work around this in most places, it creates problems for the Table of Contents in particular, as that is forced to be in this style and it overrides the font size from the styles like TOC 1 (etc). I can set the font size through VBA - ActiveDocument.Styles("Default Paragraph Font").Font.size = 10 sets it to ten point, but I can't work out how to reset it back to inherit. At the moment, my table of contents is set to be all in the same size, but really TOC 1 should be bigger than TOC 2. Does anyone have any suggestions for how to fix this? One approach is to use the organizer to copy over the style from a working document, but ideally I'd like a way to resolve the problem without doing that - especially as that's not an easy approach to automate.

    Read the article

  • Help! The log file for database 'tempdb' is full. Back up the transaction log for the database to fr

    - by michael.lukatchik
    We're running SQL Server 2000. In our database, we have an "Orders" table with approximately 750,000 rows. We can perform simple SELECT statements on this table. However, when we want to run a query like SELECT TOP 100 * FROM Orders ORDER BY Date_Ordered DESC, we receive the following message: Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space. We have other tables in our database which are similar in size of the amount of records that are in the tables (i.e. 700,000 records). On these tables, we can run any queries we'd like and we never receive a message about 'tempdb being full'. To resolve this, we've backed up our database, shrunk the actual database and also shrunk the database and files in the tempdb system database, but this hasn't resolved the issue. The size of our log file is set to autogrow. We're not sure where to go next. Are there any ideas why we still might be receiving this message? Error: 9002, Severity: 17, State: 6 The log file for database 'tempdb' is full. Back up the transaction log for the database to free up some log space.
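
    The ORDER BY over 750,000 rows is what drags tempdb in: the sort runs in tempdb, so only that query fills its log, while plain SELECTs on similarly sized tables do not. On SQL Server 2000 the usual first checks are how full the tempdb log really is and whether it is allowed to grow; a hedged sketch (templog is the default logical file name, and the sizes are only illustrative):

        -- how full each database log is, in percent
        DBCC SQLPERF(LOGSPACE);

        -- give tempdb's log more room and unrestricted growth
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, SIZE = 500MB, MAXSIZE = UNLIMITED, FILEGROWTH = 100MB);

    An index on Date_Ordered would also let the TOP 100 come back without a full sort in tempdb.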

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful p2v migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck on the error: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this: IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc" As the error remained, I read that I should build dependencies again on the virtual machine: depmod -a but this returned an error: WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory So I read about creating the directory empty and rerunning "depmod -a". I now don't get the dependencies error, but I get this instead and I don't have a clue how to proceed: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Module ip_tables not found. iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it has to do with the VM config. Any help would be greatly appreciated.
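
    Inside an OpenVZ container there is no /lib/modules and no way to load kernel modules, so depmod in the VM can never fix this; the netfilter modules have to be loaded on the hardware node and then granted to the container. A sketch run on the host (CTID 101 is a placeholder, module names copied from the list above):

        # on the OpenVZ/Proxmox host, not inside the container
        for m in ip_tables iptable_filter iptable_mangle iptable_nat \
                 ip_conntrack ipt_state ipt_REJECT ipt_LOG; do
            modprobe "$m"
        done
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle iptable_nat ipt_state ipt_REJECT ipt_LOG" --save
        vzctl restart 101

    The 'raw' table named in the error is often not granted to containers at all, so rules that reference it may still need to be dropped from the restore file.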

    Read the article

  • SQL server agent job to execute SSIS package fails, package succeeds if run manually

    - by growse
    I've got a SSIS package installed on a SQL server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it into a local table. The remote connection string is using SQL server authentication, while the local connection is using Windows auth. The remote connection password is protected, and the package was imported setting the protection level to Rely on server storage and roles for access control. If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL server agent is running under, and then run the package using dtexec, it works. If I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following: Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.". If I try to add the password in the connection string manually under data sources in the job step dialog, it refuses to save it, always seeming to remove the 'password' bit of the connection string. I thought that SQL server agent jobs always ran under the context of the account which the SQL server agent is running under. This account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when trying to run as an agent job?

    Read the article

  • How to prevent samba from holding a file lock after a client disconnects?

    - by Jean-Francois Chevrette
    Here I have a Samba server (Debian 5.0) that is configured to host Windows XP profiles. Clients connect to this server and work on their profiles directly on the Samba share (the profile is not copied locally). Every now and then, a client may not shut down properly, and thus Windows does not free the file locks. When looking at the Samba locking table, we can see that many files are still locked even though the client is not connected anymore. In our case, this seems to occur with lockfiles created by Mozilla Thunderbird and Firefox. Here's an example of the Samba locking table: # smbstatus -L | grep DENY_ALL | head -n5 Pid Uid DenyMode Access R/W Oplock SharePath Name Time -------------------------------------------------------------------------------------------------- 15494 10345 DENY_ALL 0x3019f RDWR EXCLUSIVE+BATCH /home/CORP/user1 app.profile/user1.thunderbird/parent.lock Mon Nov 22 07:12:45 2010 18040 10454 DENY_ALL 0x3019f RDWR EXCLUSIVE+BATCH /home/CORP/user2 app.profile/user2.thunderbird/parent.lock Mon Nov 22 11:20:45 2010 26466 10056 DENY_ALL 0x3019f RDWR EXCLUSIVE+BATCH /home/CORP/user3 app.profile/user3.firefox/parent.lock Mon Nov 22 08:48:23 2010 We can see that the files were opened by Windows with a DENY_ALL lock. Now when a client reconnects to this share and tries to open those files, Samba says that they are locked and denies access. Is there any way to work around this situation, or am I missing something? Edit: We would like to avoid disabling file locks on the Samba server because there are good reasons to have those enabled.
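
    Those stale entries belong to smbd processes that still think the dead client is connected; killing the PID shown by smbstatus releases the lock immediately. To make smbd notice dead clients on its own, a couple of global settings are commonly used - shown here as a sketch with illustrative values, not a tested profile-server configuration:

        # /etc/samba/smb.conf
        [global]
            reset on zero vc = yes     # drop old sessions when a rebooted client reconnects
            deadtime = 15              # close connections idle for 15 minutes
            socket options = TCP_NODELAY SO_KEEPALIVE

    "reset on zero vc" targets exactly this case: a crashed Windows client reconnects with VC number 0, and Samba then tears down the orphaned session that is still holding parent.lock.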

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and if I should use raid parameters on the filesystem in guest os'es. My main questions are: When I use 'pvcreate --dataalignment' do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the Stripe size of the RAID set (256kB), something else altogether, or do I not need this? When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)? Does anyone see any glaring errors in my set up below? I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table: Model: LSI 9750-4i DISK (scsi) Disk /dev/sda: 5722024MiB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1.00MiB 257MiB 256MiB ext4 BOOTPART boot 2 257MiB 4353MiB 4096MiB linux-swap(v1) 3 4353MiB 266497MiB 262144MiB ext4 4 266497MiB 4460801MiB 4194304MiB Partition 1 is to be the /boot partition for my xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my xen host. Partition 4 is to be (the only) physical volume to be used by LVM (for those who are counting, I left about 1.2TB unallocated for now) For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that but this method works best for my situation. Here's the hardware of interest on my CentOS 6.3 Xen Host: 4x Seagate Barracuda 3TB ST3000DM001 Drives (sector size: 512 logical/4096 physical) 3ware 9750-4i w/BBU (sector size reported: 512 logical/512 physical) All four drives make up a RAID 10 array. Stripe: 256kB Write Cache enabled Read Cache: intelligent StoreSave: Balance Thanks!
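
    As a worked example of the arithmetic: with a 256kB chunk and two data spindles in RAID 10, the full stripe is 512kB, so one reasonable (but not the only) combination is to align the PV to the full stripe and pass the same geometry down to guest filesystems - the LV name below is a placeholder:

        # physical volume aligned to the 512kB full stripe (256kB chunk x 2 data disks)
        pvcreate --dataalignment 512k /dev/sda4

        # ext4 in a guest with 4kB blocks: stride = 256k/4k = 64, stripe-width = 64 x 2 = 128
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/vg0/guest_root

    Passing stride/stripe-width inside guests only pays off if the LV itself starts on a stripe boundary, which is what the --dataalignment (and sizing LVs in whole-stripe multiples) is meant to guarantee.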

    Read the article

  • Redmine plug-in fails at rake db:migrate_plugins

    - by Drew
    Hey, First post, so hope I'm in the right place. While trying to install the Redmine plug-in 'Wiki Extensions', I keep getting stuck when I try to run the "rake db:migrate_plugins RAILS_ENV=production" command. I am moving server and I'm in bit over my head. Haven't found anything on Google that has helped me much, though I might have missed something. I have pasted in the output with --trace: (in /srv/www/vastpark.org/redmine) ** Invoke db:migrate_plugins (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate_plugins Migrating engines... Migrating acts_as_activity_provider... Migrating acts_as_attachable... Migrating acts_as_customizable... Migrating acts_as_event... Migrating acts_as_list... Migrating acts_as_searchable... Migrating acts_as_tree... Migrating acts_as_versioned... Migrating acts_as_watchable... Migrating awesome_nested_set... Migrating classic_pagination... Migrating coderay-0.9.2... Migrating gravatar... Migrating open_id_authentication... Migrating prepend_engine_views... Migrating redmine_wiki_extensions... == CreateWikiExtensionsComments: migrating =================================== -- create_table(:wiki_extensions_comments) rake aborted! An error has occurred, all later migrations canceled: Mysql::Error: Table 'wiki_extensions_comments' already exists: CREATE TABLE 'wiki_extensions_comments' ('id' int(11) DEFAULT NULL auto_increment PRIMARY KEY, 'wiki_page_id' int(11), 'key_word' varchar(255), 'user_id' int(11), 'comment' text, 'created_at' datetime, 'updated_at' datetime) ENGINE=InnoDB
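
    The migration is failing because a wiki_extensions_comments table already exists, most likely left over from an earlier, half-finished install of the plugin. If that table is empty it can simply be dropped and the plugin migrations re-run; the database and user names below are assumptions:

        mysql -u redmine -p redmine_production -e "DROP TABLE wiki_extensions_comments;"
        rake db:migrate_plugins RAILS_ENV=production

    If the table already holds comments worth keeping, back it up first instead of dropping it outright.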

    Read the article

  • Coffee spilled and went inside CPU...computer not starting

    - by Harpreet
    Today coffee got spilled over my table, and some of it (very little) reached the CPU placed under the table. I think a little bit of it got inside the CPU through the front face. When that happened the fan started running very fast and made noise. I tried to restart to see if it would be fine, but the computer didn't start again. First it gave an error of "Alert! Air temperature sensor not detected" and didn't start. Next I tried starting the computer multiple times, but then it gave some memory error, and I was not able to start the computer. In case there's a problem with the hard disk or something related to memory, is there any way we can extract our work or data? I am scared that I won't be able to extract my work if a problem like that occurs. What options would I have? Help! EDIT: I have attached the photo here and you can see the spilt area in the red circle. The hard drive electronics have been affected and the internal speaker may also have been affected. Any advice on cleaning, and can the hard drive still work? EDIT 2: Are there any professional services offered to extract data from a damaged hard disk like this one, in case I am not able to run it personally?

    Read the article

  • How can I recover XFS partitions from a formatted HD?

    - by giuprivite
    I deleted the partition table of my HD. I wanted to format another one, but by mistake, I formatted the wrong one. Then I also created some new partition on it. Now I would like, if possible, to recover my old data. The old configuration was this: A primary NTFS partition with Windows, and a secondary partition with four logical partitions: a swap and three XFS partitions (two for Ubuntu and OpenSuSE, and one with the home for both systems). This is the output I get when I run gpart in a terminal: ubuntu@ubuntu:~$ sudo gpart /dev/sdb Begin scan... Possible partition(Windows NT/W2K FS), size(39997mb), offset(0mb) Possible extended partition at offset(39997mb) Possible partition(Linux swap), size(8189mb), offset(39997mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(48187mb) Possible partition(SGI XFS filesystem), size(40942mb), offset(89149mb) Possible partition(SGI XFS filesystem), size(175044mb), offset(130112mb) End scan. Checking partitions... Partition(OS/2 HPFS, NTFS, QNX or Advanced UNIX): primary Partition(Linux swap or Solaris/x86): logical Partition(Linux ext2 filesystem): logical Partition(Linux ext2 filesystem): orphaned logical Partition(Linux ext2 filesystem): orphaned logical Ok. Guessed primary partition table: Primary partition(1) type: 007(0x07)(OS/2 HPFS, NTFS, QNX or Advanced UNIX) size: 39997mb #s(81915360) s(63-81915422) chs: (0/1/1)-(1023/254/63)d (0/1/1)-(5098/254/51)r Primary partition(2) type: 015(0x0F)(Extended DOS, LBA) size: 265245mb #s(543221849) s(81915435-625137283) chs: (1023/254/63)-(1023/254/63)d (5099/0/1)-(38912/254/2)r Primary partition(3) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Primary partition(4) type: 000(0x00)(unused) size: 0mb #s(0) s(0-0) chs: (0/0/0)-(0/0/0)d (0/0/0)-(0/0/0)r Looking the first eight lines, it seems the data are still there... but I don't know how to recover them. I have a free second HD of about 500 GB (the formatted one is 320 GB) that I can use for the recovery process.
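
    Since gpart can still see the old NTFS and XFS signatures, the usual approach is to image the disk first and let a recovery tool rebuild the partition table, rather than experimenting on the original; the 500 GB spare is large enough to hold an image of the 320 GB drive. A sketch, assuming the spare disk is mounted at /mnt/spare:

        # take a raw image so every recovery attempt is repeatable
        sudo dd if=/dev/sdb of=/mnt/spare/sdb.img bs=1M conv=noerror,sync

        # let testdisk deep-search the image; once the found layout matches the old one, write it back to /dev/sdb
        sudo testdisk /mnt/spare/sdb.img

        # after the partitions reappear, check each XFS one read-only before mounting (device name is an assumption)
        sudo xfs_repair -n /dev/sdb6

    The new partitions created after the accidental format will have overwritten whatever lived in their sectors, so anything recovered from that region may be incomplete.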

    Read the article

  • SQL Query to update parent record with child record values

    - by Wells
    I need to create a Trigger that fires when a child record (Codes) is added, updated or deleted. The Trigger stuffs a string of comma separated Code values from all child records (Codes) into a single field in the parent record (Projects) of the added, updated or deleted child record. I am stuck on writing a correct query to retrieve the Code values from just those child records that are the children of a single parent record. -- Create the test tables CREATE TABLE projects ( ProjectId varchar(16) PRIMARY KEY, ProjectName varchar(100), Codestring nvarchar(100) ) GO CREATE TABLE prcodes ( CodeId varchar(16) PRIMARY KEY, Code varchar (4), ProjectId varchar(16) ) GO -- Add sample data to tables: Two projects records, one with 3 child records, the other with 2. INSERT INTO projects (ProjectId, ProjectName) SELECT '101','Smith' UNION ALL SELECT '102','Jones' GO INSERT INTO prcodes (CodeId, Code, ProjectId) SELECT 'A1','Blue', '101' UNION ALL SELECT 'A2','Pink', '101' UNION ALL SELECT 'A3','Gray', '101' UNION ALL SELECT 'A4','Blue', '102' UNION ALL SELECT 'A5','Gray', '102' GO I am stuck on how to create a correct Update query. Can you help fix this query? -- Partially working, but stuffs all values, not just values from chile (prcodes) records of parent (projects) UPDATE proj SET proj.Codestring = (SELECT STUFF((SELECT ',' + prc.Code FROM projects proj INNER JOIN prcodes prc ON proj.ProjectId = prc.ProjectId ORDER BY 1 ASC FOR XML PATH('')),1, 1, '')) The result I get for the Codestring field in Projects is: ProjectId ProjectName Codestring 101 Smith Blue,Blue,Gray,Gray,Pink ... But the result I need for the Codestring field in Projects is: ProjectId ProjectName Codestring 101 Smith Blue,Pink,Gray ... Here is my start on the Trigger. The Update query, above, will be added to this Trigger. Can you help me complete the Trigger creation query? CREATE TRIGGER Update_Codestring ON prcodes AFTER INSERT, UPDATE, DELETE AS WITH CTE AS ( select ProjectId from inserted union select ProjectId from deleted )
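
    The subquery has to be correlated with the row being updated (match prc.ProjectId to the outer projects row) instead of re-joining the whole projects table, which is what produces the combined list. A sketch of the corrected statement - placed after the trigger's CTE it only touches the affected projects, and dropping the WHERE clause rebuilds every row outside the trigger:

        UPDATE proj
        SET Codestring = STUFF((SELECT ',' + prc.Code
                                FROM prcodes prc
                                WHERE prc.ProjectId = proj.ProjectId
                                ORDER BY prc.CodeId
                                FOR XML PATH('')), 1, 1, '')
        FROM projects proj
        WHERE proj.ProjectId IN (SELECT ProjectId FROM CTE);

    With the sample data this yields 'Blue,Pink,Gray' for project 101 and 'Blue,Gray' for 102; ordering by prc.CodeId is an assumption about the desired sequence.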

    Read the article

  • Dovecot unable to perform mysql query

    - by NathanJ2012
    I have been following the ISPMail tutorials on workaround.org (the 2.9 Wheezy version) and thus far everything has been working fine. When I reached the step to "Testing email delivery" step I noticed a error about the query in the output log from /var/log/mail.log. May 14 06:48:59 mail postfix/pickup[17704]: EA4AD240A98: uid=0 from=<root> May 14 06:48:59 mail postfix/cleanup[17776]: EA4AD240A98: message-id=<[email protected]> May 14 06:48:59 mail postfix/qmgr[17706]: EA4AD240A98: from=<[email protected]>, size=429, nrcpt=1 (queue active) May 14 06:49:00 mail dovecot: auth-worker(17782): mysql(127.0.0.1): Connected to database mailserver May 14 06:49:00 mail dovecot: auth-worker(17782): Warning: mysql: Query failed, retrying: Table 'mailserver.users' doesn't exist May 14 06:49:00 mail dovecot: auth-worker(17782): Error: sql([email protected]): User query failed: Table 'mailserver.users' doesn't exist (using built-in default user_query: SELECT home, uid, gid FROM users WHERE username = '%n' AND domain = '%d') May 14 06:49:00 mail dovecot: lda([email protected]): msgid=<[email protected]>: saved mail to INBOX May 14 06:49:00 mail postfix/pipe[17780]: EA4AD240A98: to=<[email protected]>, relay=dovecot, delay=0.09, delays=0.03/0.01/0/0.06, dsn=2.0.0, status=sent (delivered via dovecot service) May 14 06:49:00 mail postfix/qmgr[17706]: EA4AD240A98: removed I found this rather interesting that it isn't finding the DB so I went back through and checked EVERY file that I touched that involved the DB (including the postfix cf files) and everything is correct so I am baffled at this point, but oddly enough it would seem the email still made it to the correct destination in /var/vmail/domain.com/. Should I be worried about this or am I missing something here? Since it is a message from dovecot it would be the query from dovecot-sql.conf.ext which I am including here driver = mysql connect = host=127.0.0.1 dbname=mailserver user=blocked password=***REMOVED*** default_pass_scheme = PLAIN-MD5 password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
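
    The warning is harmless to delivery but easy to silence: dovecot-sql.conf.ext only defines password_query, so for the userdb lookup Dovecot falls back to its built-in user_query, which expects a users table this schema does not have. A sketch of an explicit user_query against the virtual_users table - the /var/vmail layout and the uid/gid value 5000 are assumptions to adjust to the actual vmail user:

        # /etc/dovecot/dovecot-sql.conf.ext
        user_query = SELECT '/var/vmail/%d/%n' AS home, 5000 AS uid, 5000 AS gid FROM virtual_users WHERE email='%u'

    Alternatively, a userdb block with driver = static and fixed uid/gid/home avoids the SQL userdb lookup entirely, which is the route some of these tutorials take.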

    Read the article

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3TB WD drives. These have physical 4k sectors, but there is some sort of layer which is providing 512B logical sectors (see the partition table below). In order to attempt to get some more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even if it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant, this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks! $ parted /dev/sdc GNU Parted 2.3 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) print Model: ATA WDC WD30EZRX-00M (scsi) Disk /dev/sdc: 3001GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 3001GB 3001GB zfs 9 3001GB 3001GB 8389kB P.S. I don't care about the data on the drives, I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.
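
    The 512-byte logical sectors on these drives are emulated by the drive firmware (so-called 512e), so there is no fdisk or parted switch that makes the disk present native 4k logical sectors to the kernel; what can be controlled from the OS is alignment, and the existing 1049kB (1MiB) partition start is already 4k-aligned. A few hedged commands to confirm what the kernel sees:

        # sector sizes as reported by the kernel
        cat /sys/block/sdc/queue/logical_block_size     # expected: 512
        cat /sys/block/sdc/queue/physical_block_size    # expected: 4096

        # check that partition 1 starts on an optimal boundary (if this parted build provides align-check)
        sudo parted /dev/sdc align-check optimal 1

    Filesystems created with a 4k block size on an aligned partition already issue I/O in 4k multiples, which is where the performance difference in the linked article comes from.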

    Read the article
