Search Results

Search found 58901 results on 2357 pages for 'structured data'.

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure the Mountain Lion (10.8.2) OS X Server BOOTP service to provide DHCP options 66 and 67 so that PCs on my network can PXE boot. I have tried following the bootpd man page, but it is not specific enough, and the information I have found on the net is conflicting, with nothing definitive for Mountain Lion's DHCP. From the bootpd man page:

        bootpd has a built-in type conversion table for many more options, mostly those
        specified in RFC 2132, and will try to convert from whatever type the option
        appears in the property list to the binary, packet format. For example, if bootpd
        knows that the type of the option is an IP address or list of IP addresses, it
        converts from the string form of the IP address to the binary, network byte order
        numeric value. If the type of the option is a numeric value, it converts from
        string, integer, or boolean, to the proper sized, network byte-order numeric
        value. Regardless of whether bootpd knows the type of the option or not, you can
        always specify the DHCP option using the data property list type:

            <key>dhcp_option_128</key>
            <data>
            AAqV1Tzo
            </data>

    My TFTP server is 172.16.152.20 and the boot file is pxelinux.0. I have edited /etc/bootpd.plist and added the following to the subnet dict:

        <key>dhcp_option_66</key>
        <data>
        LW4gLWUgrBCYFAo=
        </data>
        <key>dhcp_option_67</key>
        <data>
        LW4gLWUgcHhlbGludXguMAo=
        </data>

    According to the man page, the data elements are supposed to be Base64 encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods:

        echo "172.16.152.20" | openssl enc -base64

    returns MTcyLjE2LjE1Mi4yMAo=. DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml) generating a string from 172.16.152.20 yields LW4gLWUgMTcyLjE2LjE1Mi4yMAo=, while generating an IP address from 172.16.152.20 yields LW4gLWUgrBCYFAo= (used in the example above). Encoding pxelinux.0 with these methods likewise yields different encodings. I have tried all three encodings for the data elements, but nothing seems to work, i.e. my PXE boot clients do not get directed to my TFTP server. Can anyone help? Regards, Paul Adams.
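
    Base64 is reversible, so a quick sanity check is to decode the payloads above and inspect the exact bytes bootpd would send. A minimal Python sketch (the decodes are mechanical; which form bootpd expects for option 66 — the four binary IP bytes or the ASCII string — is left open, since RFC 2132 defines option 66 as a server name string) shows both values carry a literal "-n -e " prefix and a trailing newline, i.e. shell artifacts rather than clean option data:

        import base64
        import socket

        # Decode the values from the question to see the raw option bytes.
        print(base64.b64decode("LW4gLWUgrBCYFAo="))
        # b'-n -e \xac\x10\x98\x14\n'  (literal "-n -e ", then 172.16.152.20, then a newline)
        print(base64.b64decode("LW4gLWUgcHhlbGludXguMAo="))
        # b'-n -e pxelinux.0\n'

        # Clean candidate payloads, with no stray flags and no trailing newline:
        print(base64.b64encode(socket.inet_aton("172.16.152.20")))  # b'rBCYFA=='              option 66 as binary IP
        print(base64.b64encode(b"172.16.152.20"))                   # b'MTcyLjE2LjE1Mi4yMA=='  option 66 as a string
        print(base64.b64encode(b"pxelinux.0"))                      # b'cHhlbGludXguMA=='      option 67 as a string

    On the shell side, the trailing newline comes from echo; printf '%s' 172.16.152.20 | openssl enc -base64 avoids it.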

    Read the article

  • How can I create proportionally-sized pie charts side-by-side in Excel 2007?

    - by Andrew Doran
    I have a pivot table with two sets of data as follows:

                  2011   2012
        Slice A     45     20
        Slice B     33     28
        Slice C     22      2

    I am trying to present two pie charts side by side, one with the 2011 data and one with the 2012 data, and I want the relative size of each pie chart to reflect the totals: the chart with the 2011 data (totalling 100) should be twice the size of the chart with the 2012 data (totalling 50). The 'pie of pie' chart type seems to be closest to what I am looking for, but it breaks out data from one slice and presents it in a second diagram, so it isn't appropriate here.
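
    Setting Excel aside for a moment, the sizing rule itself is worth pinning down: for "twice the size" to read correctly, each pie's area (not its diameter) should be proportional to its total, so the radius scales with the square root of the total. A sketch of that rule outside Excel (hypothetical matplotlib code, not an Excel answer):

        import math
        import matplotlib.pyplot as plt

        data = {"2011": [45, 33, 22], "2012": [20, 28, 2]}
        labels = ["Slice A", "Slice B", "Slice C"]

        fig, axes = plt.subplots(1, 2)
        max_total = max(sum(v) for v in data.values())
        for ax, (year, values) in zip(axes, data.items()):
            # Area proportional to the total => radius proportional to sqrt(total).
            radius = math.sqrt(sum(values) / max_total)
            ax.pie(values, labels=labels, radius=radius)
            ax.set_aspect("equal")
            ax.set_title(year)
        plt.show()

    With these numbers the 2011 pie gets radius 1.0 and the 2012 pie radius sqrt(0.5) ≈ 0.71, i.e. exactly half the area.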

    Read the article

  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page, but on login I get the 500 error on my table page. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder, symlinked into /etc/nginx/conf.d/, and I start my uWSGI "instance" (not sure what exactly to call it) in a screen session as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        # user info
        uid = metheuser
        gid = ers_group

        # application's base folder
        base = /home/metheuser/webapps/ers_portal

        # python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)

        # socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock

        # permissions for the socket file
        chmod-socket = 666

        # uwsgi variable only, does not relate to your flask application
        callable = app

        # location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            pass  # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names,
                                   data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip, on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the "my_table" route prints into the log file, but not once it stops working. Any ideas?
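
    One place worth instrumenting, on the assumption (only an assumption) that the 500 originates inside reload_data — for example the pickle being replaced mid-read, or failing to load after a change: wrapping the load makes the real traceback land in the uwsgi log instead of an opaque 500. A sketch along the lines of the question's code (the literal filename stands in for ec.dd_filename):

        import logging
        import os
        import pickle

        log = logging.getLogger(__name__)

        data_modification_time = None
        data = None

        def reload_data(filename='/home/metheuser/webapps/ers_portal/app/static/data.pkl'):
            global data_modification_time, data
            try:
                mtime = os.stat(filename).st_mtime
                if data_modification_time != mtime:
                    with open(filename, 'rb') as f:   # pickles are binary; text mode can break on some payloads
                        data = pickle.load(f)
                    data_modification_time = mtime    # record mtime only after a successful load
            except Exception:
                # Without this, any failure here surfaces as a bare 500 with nothing useful logged.
                log.exception("reload_data failed for %s", filename)
                raise
            return data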

    Read the article

  • heimdal kerberos in openldap issue

    - by Brian
    I think I posted this on the wrong 'sister site', so here it is. I'm having a bit of trouble getting Kerberos (the Heimdal version) to work nicely with OpenLDAP. The Kerberos database is stored in LDAP itself, and the KDC uses SASL EXTERNAL authentication as root to access the container ou. I created the database in LDAP fine using kadmin -l, but it won't let me use kadmin without the -l flag:

        root@rds0:~# kadmin -l
        kadmin> list *
        krbtgt/REALM
        kadmin/changepw
        kadmin/admin
        changepw/kerberos
        kadmin/hprop
        WELLKNOWN/ANONYMOUS
        WELLKNOWN/org.h5l.fast-cookie@WELLKNOWN:ORG.H5L
        default
        brian.empson
        brian.empson/admin
        host/rds0.example.net
        ldap/rds0.example.net
        host/localhost
        kadmin> exit

        root@rds0:~# kadmin
        kadmin> list *
        brian.empson/admin@REALM's Password:     <----- with the right password
        kadmin: kadm5_get_principals: Key table entry not found
        kadmin> list *
        brian.empson/admin@REALM's Password:     <----- with a wrong password
        kadmin: kadm5_get_principals: Already tried ENC-TS-info, looping
        kadmin>

    I can get tickets without a problem:

        root@rds0:~# klist
        Credentials cache: FILE:/tmp/krb5cc_0
                Principal: brian.empson@REALM

          Issued                 Expires                Principal
        Nov 11 14:14:40 2012     Nov 12 00:14:37 2012   krbtgt/REALM@REALM
        Nov 11 14:40:35 2012     Nov 12 00:14:37 2012   ldap/rds0.example.net@REALM

    But I can't seem to change my own password without kadmin -l:

        root@rds0:~# kpasswd
        brian.empson@REALM's Password:     <---- right password
        New password:
        Verify password - New password:
        Auth error : Authentication failed

        root@rds0:~# kpasswd
        brian.empson@REALM's Password:     <---- wrong password
        kpasswd: krb5_get_init_creds: Already tried ENC-TS-info, looping

    kadmind's logs are not helpful at all:

        2012-11-11T13:48:33 krb5_recvauth: Key table entry not found
        2012-11-11T13:51:18 krb5_recvauth: Key table entry not found
        2012-11-11T13:53:02 krb5_recvauth: Key table entry not found
        2012-11-11T14:16:34 krb5_recvauth: Key table entry not found
        2012-11-11T14:20:24 krb5_recvauth: Key table entry not found
        2012-11-11T14:20:44 krb5_recvauth: Key table entry not found
        2012-11-11T14:21:29 krb5_recvauth: Key table entry not found
        2012-11-11T14:21:46 krb5_recvauth: Key table entry not found
        2012-11-11T14:23:09 krb5_recvauth: Key table entry not found
        2012-11-11T14:45:39 krb5_recvauth: Key table entry not found

    The KDC reports that both accounts succeed in authenticating:

        2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM
        2012-11-11T14:48:03 Client sent patypes: REQ-ENC-PA-REP
        2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ
        2012-11-11T14:48:03 sending 294 bytes to IPv4:192.168.72.10
        2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM
        2012-11-11T14:48:03 Client sent patypes: ENC-TS, REQ-ENC-PA-REP
        2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM
        2012-11-11T14:48:03 ENC-TS Pre-authentication succeeded -- brian.empson@REALM using aes256-cts-hmac-sha1-96
        2012-11-11T14:48:03 ENC-TS pre-authentication succeeded -- brian.empson@REALM
        2012-11-11T14:48:03 AS-REQ authtime: 2012-11-11T14:48:03 starttime: unset endtime: 2012-11-11T14:53:00 renew till: unset
        2012-11-11T14:48:03 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96
        2012-11-11T14:48:03 sending 704 bytes to IPv4:192.168.72.10

        2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM
        2012-11-11T14:45:39 Client sent patypes: REQ-ENC-PA-REP
        2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ
        2012-11-11T14:45:39 sending 303 bytes to IPv4:192.168.72.10
        2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM
        2012-11-11T14:45:39 Client sent patypes: ENC-TS, REQ-ENC-PA-REP
        2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM
        2012-11-11T14:45:39 ENC-TS Pre-authentication succeeded -- brian.empson/admin@REALM using aes256-cts-hmac-sha1-96
        2012-11-11T14:45:39 ENC-TS pre-authentication succeeded -- brian.empson/admin@REALM
        2012-11-11T14:45:39 AS-REQ authtime: 2012-11-11T14:45:39 starttime: unset endtime: 2012-11-11T15:45:39 renew till: unset
        2012-11-11T14:45:39 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96
        2012-11-11T14:45:39 sending 717 bytes to IPv4:192.168.72.10

    I wish I had more detailed logging messages; running kadmind in debug mode almost works, but it just kicks me back to the shell when I type in the correct password. GSSAPI via LDAP doesn't work either, though I suspect that's because some parts of Kerberos aren't working:

        root@rds0:~# ldapsearch -Y GSSAPI -H ldaps:/// -b "o=mybase" o=mybase
        SASL/GSSAPI authentication started
        ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80)
                additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information ()

        root@rds0:~# ldapsearch -Y EXTERNAL -H ldapi:/// -b "o=mybase" o=mybase
        SASL/EXTERNAL authentication started
        SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
        SASL SSF: 0
        # extended LDIF
        <snip>

    Would anyone be able to point me in the right direction?

    Read the article

  • nTop RRD file architecture

    - by Seanny123
    I have a gig of nTop RRD files that I would like to start graphing with rrdtool rather than with nTop itself (I'm hoping to work from a separate backup of the database, as a workaround for nTop's inability to limit the RRD files by size), but I don't know how the files are structured. I've tried reading the RRD documentation on SourceForge and the nTop FAQ, but I'm not finding the information I need. Does anyone know of documentation I should be looking at, or how the files are organized? Here is a screenshot of the file structure: https://dl.dropbox.com/u/669437/file%20structure.png. At first I thought it was organized by IP address (so the RRD files for address 1.1.2.3 would be stored in folder 1-1-2-3, or even in reverse order), but that doesn't seem to be the case. It isn't organized by MAC address either, although some hosts are saved that way. Any help would be appreciated.
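
    Whatever nTop's on-disk layout turns out to be, each .rrd file describes its own internal structure, so one way in is to walk the tree and ask every file what it contains. A sketch that wraps the standard rrdtool CLI (the directory path is hypothetical; 'rrdtool dump' would likewise emit a file's full contents as XML):

        import pathlib
        import subprocess

        # Walk the nTop data directory and print each RRD's data sources and archives.
        for rrd in pathlib.Path("/var/lib/ntop/rrd").rglob("*.rrd"):  # hypothetical path
            print("==", rrd, "==")
            # 'rrdtool info' lists the data sources (ds[...]) and round-robin archives (rra[...]).
            info = subprocess.run(["rrdtool", "info", str(rrd)],
                                  capture_output=True, text=True, check=True)
            print(info.stdout)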

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here; thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe; I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted, and I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2: I intend to physically remove the drives from the machine and replace them with larger ones. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration:

        [root@gort ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.7 (Tikanga)
        [root@gort ~]# uname -a
        Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
        [root@gort ~]# rpm -qa | egrep '^(hp|cpq)'
        cpqacuxe-9.30-15.0
        hp-health-9.25-1551.7.rhel5
        hpsmh-7.1.2-3
        hpdiags-9.3.0-466
        hponcfg-3.1.0-0
        hp-snmp-agents-9.25-2384.8.rhel5
        hpacucli-9.30-15.0
        [root@gort ~]# hpacucli
        HP Array Configuration Utility CLI 9.30.15.0
        Detecting Controllers...Done.
        Type "help" for a list of supported commands.
        Type "exit" to close the console.

        => ctrl all show config detail

        Smart Array P400 in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Cache Serial Number: PA82C0J9SVW34U
           RAID 6 (ADG) Status: Enabled
           Controller Status: OK
           Hardware Revision: D
           Firmware Version: 7.22
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: True
           Cache Status: OK
           Cache Ratio: 25% Read / 75% Write
           Drive Write Cache: Disabled
           Total Cache Size: 256 MB
           Total Cache Memory Available: 208 MB
           No-Battery Write Cache: Disabled
           Cache Backup Power Source: Batteries
           Battery/Capacitor Count: 1
           Battery/Capacitor Status: OK
           SATA NCQ Supported: True

           Logical Drive: 1
              Size: 136.7 GB
              Fault Tolerance: RAID 1
              Heads: 255
              Sectors Per Track: 32
              Cylinders: 35132
              Strip Size: 128 KB
              Full Stripe Size: 128 KB
              Status: OK
              Caching: Enabled
              Unique Identifier: 600508B100184A395356573334550002
              Disk Name: /dev/cciss/c0d0
              Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB
              OS Status: LOCKED
              Logical Drive Label: A0027AA78DEE
              Mirror Group 0:
                 physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
              Mirror Group 1:
                 physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
              Drive Type: Data

           Array: A
              Interface Type: SAS
              Unused Space: 0 MB
              Status: OK
              Array Type: Data

              physicaldrive 1I:1:1
                 Port: 1I
                 Box: 1
                 Bay: 1
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM57RF40000983878FX
                 Model: HP DG146BB976
                 Current Temperature (C): 29
                 Maximum Temperature (C): 35
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

              physicaldrive 1I:1:2
                 Port: 1I
                 Box: 1
                 Bay: 2
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM55VQC000098388524
                 Model: HP DG146BB976
                 Current Temperature (C): 29
                 Maximum Temperature (C): 36
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

           Logical Drive: 2
              Size: 546.8 GB
              Fault Tolerance: RAID 5
              Heads: 255
              Sectors Per Track: 32
              Cylinders: 65535
              Strip Size: 64 KB
              Full Stripe Size: 256 KB
              Status: OK
              Caching: Enabled
              Parity Initialization Status: Initialization Completed
              Unique Identifier: 600508B100184A395356573334550003
              Disk Name: /dev/cciss/c0d1
              Mount Points: None
              OS Status: LOCKED
              Logical Drive Label: A5C9C6F81504
              Drive Type: Data

           Array: B
              Interface Type: SAS
              Unused Space: 0 MB
              Status: OK
              Array Type: Data

              physicaldrive 1I:1:3
                 Port: 1I
                 Box: 1
                 Bay: 3
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM2H5PE00009802NK19
                 Model: HP DG146ABAB4
                 Current Temperature (C): 30
                 Maximum Temperature (C): 37
                 PHY Count: 1
                 PHY Transfer Rate: Unknown

              physicaldrive 1I:1:4
                 Port: 1I
                 Box: 1
                 Bay: 4
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM28YY400009750MKPJ
                 Model: HP DG146ABAB4
                 Current Temperature (C): 31
                 Maximum Temperature (C): 36
                 PHY Count: 1
                 PHY Transfer Rate: 3.0Gbps

              physicaldrive 2I:1:5
                 Port: 2I
                 Box: 1
                 Bay: 5
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM2FGYV00009802N3GN
                 Model: HP DG146ABAB4
                 Current Temperature (C): 30
                 Maximum Temperature (C): 38
                 PHY Count: 1
                 PHY Transfer Rate: Unknown

              physicaldrive 2I:1:6
                 Port: 2I
                 Box: 1
                 Bay: 6
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM8AFAK00009920MMV1
                 Model: HP DG146BB976
                 Current Temperature (C): 31
                 Maximum Temperature (C): 41
                 PHY Count: 2
                 PHY Transfer Rate: Unknown, Unknown

              physicaldrive 2I:1:7
                 Port: 2I
                 Box: 1
                 Bay: 7
                 Status: OK
                 Drive Type: Data Drive
                 Interface Type: SAS
                 Size: 146 GB
                 Rotational Speed: 10000
                 Firmware Revision: HPDE
                 Serial Number: 3NM2FJQD00009801MSHQ
                 Model: HP DG146ABAB4
                 Current Temperature (C): 29
                 Maximum Temperature (C): 39
                 PHY Count: 1
                 PHY Transfer Rate: Unknown

    Read the article

  • CIFS(Samba) + ACL = not working

    - by tst
    I have two servers with Debian 5.0.

    server1:

        samba 2:3.2.5-4lenny9
        smbfs 2:3.2.5-4lenny9

    smb.conf:

        [test]
        comment = test
        path = /var/www/_test/
        browseable = no
        only guest = yes
        writable = yes
        printable = no
        create mask = 0644
        directory mask = 0755

        server1:~# mount | grep sda3
        /dev/sda3 on /var/www type ext3 (rw,acl,user_xattr)
        server1:~# getfacl /var/www/_test/
        # file: var/www/_test/
        # owner: www-data
        # group: www-data
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:user:www-data:rw-
        default:user:testuser:rw-
        default:group::rwx
        default:mask::rwx
        default:other::r-x

    server2:

        samba-common 2:3.2.5-4lenny9
        smbfs 2:3.2.5-4lenny9

        server2:~# mount.cifs //server1/test /media/smb/test -o rw,user_xattr,acl
        server2:~# mount | grep test
        //server1/test on /media/smb/test type cifs (rw,mand)
        server2:~# getfacl /media/smb/test/
        # file: media/smb/test/
        # owner: www-data
        # group: www-data
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:user:www-data:rw-
        default:user:testuser:rw-
        default:group::rwx
        default:mask::rwx
        default:other::r-x

    And here is the problem:

        server2:~# su - testuser
        testuser@server2:~$ touch /media/smb/test/123
        touch: cannot touch `/media/smb/test/123': Permission denied

    What's wrong?

    Read the article

  • Why is a ZFS pool not persisting over server restart?

    - by Chance
    I have a ZFS pool with 4 drives. It also has a 3 GB ZIL and a 20 GB L2ARC, each a partition on an SSD that doubles as my Linux Mint (ver. 13) boot drive. The pool is mounted at /data. The problem I am running into is that when I restart the server, the pool/directory is completely wiped despite having had data in it before. I'm afraid I'm doing something wrong in the setup, which leads me to the following questions: What would cause this? Is there any way to get the data back? How do I stop it from happening in the future? Thank you in advance!

          pool: data
         state: ONLINE
          scan: none requested
        config:

                NAME        STATE     READ WRITE CKSUM
                data        ONLINE       0     0     0
                  raidz2-0  ONLINE       0     0     0
                    sda1    ONLINE       0     0     0
                    sdb1    ONLINE       0     0     0
                    sdc1    ONLINE       0     0     0
                    sdd1    ONLINE       0     0     0
                logs
                  sde4      ONLINE       0     0     0
                cache
                  sde3      ONLINE       0     0     0

        errors: No known data errors
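
    A few checks worth running right after a reboot, under the assumption (only an assumption) that the pool isn't being destroyed at all but simply isn't re-imported or re-mounted at boot — in which case new writes land on the empty /data directory of the root filesystem, and the old data would reappear once the pool is imported and mounted again. A sketch wrapping the standard commands:

        import subprocess

        def run(cmd):
            print("$", " ".join(cmd))
            print(subprocess.run(cmd, capture_output=True, text=True).stdout)

        run(["zpool", "list"])    # is the pool even imported right now?
        run(["zpool", "import"])  # with no arguments: scans for importable, un-imported pools
        run(["zfs", "get", "-H", "mounted,mountpoint", "data"])  # is the dataset really mounted at /data?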

    Read the article

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two RedHat 4 servers; the client is 4.6, the server is 4.5. I'm attempting to mount a share from the server onto the client via NFS. The /etc/exports configuration is as follows:

        /opt/data/config bkup(rw,no_root_squash,async)
        /opt/data/db     bkup(rw,no_root_squash,async)

    exportfs returns these (among other) shares, and nfs is running according to ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually considering the issues I'm having. So, I issue the mount request:

        mount dist:/opt/data/config /mnt/config
        mount: dist:/opt/data/config failed, reason given by server: Permission denied

    OK, so let's see what the server has to say for itself:

        May  6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config)

    It says it allowed the mount to take place. How can I diagnose why the client and server are disagreeing on the result?
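
    A hedged starting point for the diagnosis (a sketch wrapping standard NFS tooling; whether it reveals the cause here is an assumption): compare what the server thinks it is exporting, and to whom, with what it advertises to this particular client.

        import subprocess

        def show(cmd):
            print("$", " ".join(cmd))
            print(subprocess.run(cmd, capture_output=True, text=True).stdout)

        # On the server: the effective export table, with options and client patterns as applied.
        show(["exportfs", "-v"])

        # On the client: what the server advertises to *this* host. A mismatch between the
        # hostname in /etc/exports ('bkup') and the client's reverse DNS as seen by the server
        # is a classic way to get an "authenticated" log line followed by a denied mount.
        show(["showmount", "-e", "dist"])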

    Read the article

  • Accessing through VPN, which internet connection would be used

    - by Sriram
    I have a data card with a limit of 2 GB of up/download traffic per month, and an office internet line with unlimited up/download. I have successfully connected to the VPN using my data card and, by changing certain configuration such as DNS, have also been able to use my office line for internet access (verified by doing an IP check, which returns the static IP of our office). Now to my question: is this just NAT, or am I actually using my office line for all communication? Which connection would reflect the usage/trace? The data card usage log at this moment does not reflect any usage, which is confusing, since the VPN runs over the data card connection. Furthermore (theoretically), would the net be any faster this way than by connecting directly through the data card, if my office line is, let us say, 8 Mbps and the data card is 512 kbps?

    Read the article

  • Concatenate cells that change daily automatically?

    - by Harold
    I use CONCATENATE to pull data together from different cells in my spreadsheet. Since my data changes daily, I want the formula to also change daily, without my having to manually enter the new cell in the CONCATENATE formula. I am looking for a way to do this but am not sure how; can anyone out there help me, please? I appreciate the assistance in advance! Maybe this will help to explain what I need: I have a row of data in D4:AH4 that I fill in daily based on the new day, and I use the following formula:

        =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ",TEXT('Raw Data'!E4,"0.0%"))

    E4 being the cell that changes daily, where the next day would be F4, then G4, etc. All other parts of the formula stay the same. I hope this helps! Thanks! :)
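
    One way to make the moving reference advance on its own, sketched under the assumption that D4:AH4 hold days 1-31 of the current month (so the day of the month can index the column directly):

        =CONCATENATE(TEXT('Raw Data'!B4,"m/d")," ",TEXT('Raw Data'!C4,"")," ",TEXT(INDEX('Raw Data'!$D4:$AH4,DAY(TODAY())),"0.0%"))

    INDEX picks the n-th cell of D4:AH4 with n = DAY(TODAY()); if the row doesn't track the calendar day, replacing DAY(TODAY()) with COUNTA('Raw Data'!$D4:$AH4) would instead point at the last filled cell.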

    Read the article

  • Linux software raid robustness for raid1 vs other raid levels

    - by Waxhead
    I have a RAID 5 array running, and now also a RAID 1 that I set up yesterday. Since RAID 5 calculates parity, it should be able to catch silent data corruption on one disk. For RAID 1, however, the disks are just mirrors. The more I think about it, the more I figure that RAID 1 is actually quite risky: sure, it will save me from a disk failure, but it might not be as good when it comes to protecting the data on the disk (which is actually more important to me). How does Linux software RAID actually store RAID 1 data on disk? How does it know which spindle is giving corrupt data, if the disk (or subsystem) is not reporting any errors? If RAID 1 really gives me disk protection rather than data protection, are there some tricks I can do with mdadm to create a two-disk "RAID 5 like" setup, i.e. lose capacity but keep redundancy for the data too?
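
    For what it's worth, the md layer does ship a consistency scrub that works on RAID 1 too: writing check to the array's sync_action file makes md read both mirror halves and count disagreements in mismatch_cnt. It detects that the copies differ, though with two plain mirrors it has no way to tell which side is correct (unlike parity RAID). A minimal sketch, assuming the array is md0 and run as root:

        import pathlib
        import time

        md = pathlib.Path("/sys/block/md0/md")            # adjust to the RAID 1 array's name

        md.joinpath("sync_action").write_text("check\n")  # start a scrub: read both halves, compare

        while md.joinpath("sync_action").read_text().strip() != "idle":
            time.sleep(10)                                # wait for the scrub to finish

        # Number of 512-byte sectors found to differ between the mirror halves.
        print("mismatch_cnt:", md.joinpath("mismatch_cnt").read_text().strip())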

    Read the article

  • rename/delete a folder from multipart rar file

    - by kikio
    Hello. I have a question (I sent it in the past). I have a multipart RAR file. The contents are:

        file.part01.rar:  myfolder (a folder)    data.cab
        file.part02.rar:  myfolder (a folder)    data.cab
        file.part03.rar:  myfolder (a folder)    data.cab
        file.part04.rar:  difffolder (a folder)  anfolder (a folder)  data.cab
        file.part05.rar:  myfolder (a folder)    data.cab

    I want to extract it, so I right-click on "file.part01.rar" and select "Extract to ...". It extracts 3 files, but at part 4 WinRAR says: "CRC. This file is corrupt." I think the problem is the folder names in part04.rar. Is there any way to rename folders in part04.rar, and to move "data.cab" from "anfolder" to "difffolder"? I really need this, and it is very urgent! Thank you.

    Read the article

  • Good set of web hosting permissions?

    - by Jorge Israel Peña
    Hey guys, I just got a Linode and I'm in the process of configuring it. It's running nginx with php-fpm and Passenger. nginx was compiled and is running as user nginx; php-fpm (PHP with the FastCGI process manager) is running as www-data (in group www-data). My sites currently live in /var/www, for example /var/www/test.com. I'm just wondering what the general 'flow' of things is. For example, /var/www is owned by root: should I chown /var/www/test.com to nginx or to www-data? Or should I put nginx in the www-data group? How should site uploading work: do I just transfer files to the /var/www/test.com directory as root (sudo) and then chown -R www-data:www-data .? Thanks. I'm capable of figuring things out on my own; I'm just wondering what the typical/general way of handling users/groups/permissions/site files is on Linux with a web server.

    Read the article

  • Is there still a place for tape storage?

    - by Jon Ericson
    We've backed up our data on LTO tapes for years and it's a real comfort to know we have everything on tape. A sister project and one of our data providers have both moved to 100% disk storage because the cost of disk has dropped so much. When we propose systems to potential customers these days we tend to downplay or not mention our use of tape systems for data storage since it might seem outdated. I feel more comfortable with having data saved in two separate formats: disks and tape. In addition, once data is securely written to tape, I feel (perhaps naively) that it's been permanently saved. Not having to rely on a RAID controller to be able to read back data is another plus for me. Do you see a place for tape backup these days?

    Read the article

  • Use Excel Table Column in ComboBox Input Range property

    - by V7L
    I asked this on Stack Overflow and was redirected here; apologies for the redundancy. I have an Excel worksheet with a combo box on Sheet1 that is populated via its Input Range property from a dynamic named range on Sheet2. It works fine, and no VBA is required. My data on Sheet2 is actually in an Excel Table (all data is in the XLS file; there are no external data sources). For clarity, I wanted to use a structured table reference for the combo box's Input Range, but cannot seem to find a syntax that works, e.g.:

        myTable[[#Data],[myColumn3]]

    I cannot find any indication that the combo box WILL accept structured table references, though I cannot see why it wouldn't. So, a two-part question: 1. Is it possible to use a table column reference in the combo box Input Range property (without VBA)? 2. How?
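
    A sketch of a commonly used workaround, on the assumption it applies to this workbook: form controls typically reject a structured reference typed directly into Input Range, but they do accept a defined name, and a defined name can itself refer to the table column, so the reference stays dynamic as the table grows. Define a name (say myColumn3List, a hypothetical name) whose Refers To is:

        =myTable[[#Data],[myColumn3]]

    and then enter myColumn3List as the combo box's Input Range, just as with the dynamic named range already in use on Sheet2.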

    Read the article

  • How to have 2 windows machines on the same network with the same IP address

    - by Stu
    I have a custom-made ADC device that is spitting out data in addressed UDP packets. The device is plugged into a 4-port switch. I have one Windows Embedded Standard 7 machine which is the normal recipient of that data: to be able to receive the data (using LabVIEW), the Windows network adapter's IPv4 settings must have a static IP address that corresponds to the UDP packet destination. I would like to add a second Windows machine (this one is just regular Win 7 Pro) to simultaneously catch the data; however, with all devices connected to the switch, the Win 7 Pro machine recognizes an IP address conflict and will not take the required static IP address. (The network adapter settings show that the correct value has been entered, but ipconfig shows that it is not actually set.) Neither Windows machine needs to transmit network data; they only need to receive the UDP data from the ADC device. Is there any way to disable this IP address conflict detection 'feature' of Windows networking?
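
    One avenue sometimes suggested for exactly this situation (an assumption, not verified on Windows Embedded Standard 7): Windows detects the conflict via the gratuitous ARPs it sends when an address is configured, and the documented ArpRetryCount TCP/IP registry value, set to 0, suppresses them — machine-wide, so use with care. A hypothetical sketch of the change:

        import winreg

        # Hypothetical tweak: ArpRetryCount=0 stops Windows from sending the gratuitous
        # ARPs it uses for duplicate-address detection. This disables conflict detection
        # for ALL addresses on the machine; a reboot is typically needed to take effect.
        key = winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
            0,
            winreg.KEY_SET_VALUE,
        )
        winreg.SetValueEx(key, "ArpRetryCount", 0, winreg.REG_DWORD, 0)
        winreg.CloseKey(key)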

    Read the article

  • Can't read directory owned by my group

    - by Jonathan
    I moved the postgres data directory to a separate partition and it works great. The directory is owned by the postgres user and postgres group:

        d-wx------ 11 postgres postgres 4.0K 2010-06-11 08:28 data/

    I added myself to the group:

        > sudo addgroup me postgres
        > groups me
        me : me adm dialout cdrom plugdev lpadmin admin sambashare postgres

    And gave the group read and execute permissions to everything in the directory:

        sudo chmod -R g+rx ./data
        d-wxr-x--- 11 postgres postgres 4.0K 2010-06-11 08:28 data/

    But I still cannot cd into or ls the directory:

        > ls data
        ls: cannot open directory data: Permission denied

    What beginner mistake am I making?

    Read the article

  • Copy only remaining rows after filter to new Excel Workbook

    - by Joel Coehoorn
    I have an Excel file with an external data connection set up. It pulls data in directly from a database, and gives us about 450 rows. The header row allows us to filter the data in the sheet, and we use this as a general purpose tool... I will use the filters to narrow down what I'm looking at based on criteria that change depending on the circumstance. Often, after filtering the data, I want to send just the filtered records to another person. I'd like to copy/paste just the remaining rows into a new Workbook to send via e-mail. Unfortunately, this doesn't work. When I paste the data, it still pastes all the data. The filtered rows are still in the workbook... they're just hidden. I want them gone from the new file completely. How can I do this?

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, and we also want to hold it in a single place where multiple users can get at it, so we have moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like running queries and filters and exporting the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to create an Access database with links for each current database or for ones created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases, and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • How to make a dropdown list such that... (see details)

    - by daysandtimes
    I want to plot the stock prices of certain companies against the S&P 500. I have all the price data downloaded into my Excel sheet already, and I want to create a line graph in Excel. One line is fixed, the S&P 500; the other is the company I select. I know how to use data validation to create a dropdown list, but how can I make it so that when I select company A, I see only company A's price data (not company B's, C's, etc.), and when I select company B, only company B's price data plus the S&P data, and so on? The S&P line should be visible all the time, but each company's price line should appear only when that company is selected. Also, is there an easy way to normalize all the data sets so that the starting point is always 100?
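
    On the normalization question, a sketch under the assumption that each price series sits in its own column with the first price in row 2: rebasing a series to 100 is just dividing each price by the series' first price, e.g. in a helper column for a series in column B:

        =100*B2/B$2

    copied down (and across for the other series), so every line starts at 100. For the show-one-company part, a common pattern is one more helper column that pulls the selected company's rebased prices with INDEX/MATCH keyed off the data-validation cell, with the chart plotting only the S&P column and that single helper column.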

    Read the article

  • How can I keep persistent cookies from certain domains only?

    - by Mike L.
    This question is similar to this one, which covers Firefox, but I want to know how to do it in Chrome: I want Chrome to clear cookies from all sites except those from certain domains. In the Cookies section of Content Settings I've made the following selections:

        (*) Allow local data to be set (recommended)
        ( ) Allow local data to be set for the current session only
        ( ) Block sites from setting any data
        [ ] Block third-party cookies and site data
        [x] Clear cookies and other site and plug-in data when I close my browser

    After logging in to my preferred website(s), I find the required domains listed when I click All cookies and site data. Let's say I find some cookies for mysite.com and www.mysite.com. Now I click Manage exceptions and enter these items:

        Hostname Pattern     Behavior
        -----------------------------
        mysite.com           Allow
        www.mysite.com       Allow

    Unfortunately, this does not seem to work: when I close Chrome and reopen it, all cookies are gone, even those from the configured mysite.com hosts.

    Read the article

  • C# powershell output reader iterator getting modified when pipeline closed and disposed.

    - by scope-creep
    Hello, I'm calling a PowerShell script from C#. The script is pretty small and is "gps;$host.SetShouldExit(9)", which lists processes and then sends back an exit code to be captured by the PSHost object. The problem I have is that when the pipeline has been stopped and disposed, the output reader PSHost collection still seems to be written to, and is filling up. So when I try to copy it to my own output object, it craps out with an OutOfMemoryException when I try to iterate over it; sometimes it excepts with a "Collection was modified" message. Here is the code:

        private void ProcessAndExecuteBlock(ScriptBlock Block)
        {
            Collection<PSObject> PSCollection = new Collection<PSObject>();
            Collection<Object> PSErrorCollection = new Collection<Object>();
            Boolean Error = false;
            int ExitCode = 0;

            // Send for execution.
            ExecuteScript(Block.Script);

            // Process the waithandles.
            while (PExecutor.PLine.PipelineStateInfo.State == PipelineState.Running)
            {
                // Wait for either error or data waithandle.
                switch (WaitHandle.WaitAny(PExecutor.Hand))
                {
                    // Data
                    case 0:
                        Collection<PSObject> data = PExecutor.PLine.Output.NonBlockingRead();
                        if (data.Count > 0)
                        {
                            for (int cnt = 0; cnt <= (data.Count - 1); cnt++)
                            {
                                PSCollection.Add(data[cnt]);
                            }
                        }
                        // Check to see if the pipeline has been closed.
                        if (PExecutor.PLine.Output.EndOfPipeline)
                        {
                            // Bring back the exit code.
                            ExitCode = RHost.ExitCode;
                        }
                        break;
                    case 1:
                        Collection<object> Errordata = PExecutor.PLine.Error.NonBlockingRead();
                        if (Errordata.Count > 0)
                        {
                            Error = true;
                            for (int count = 0; count <= (Errordata.Count - 1); count++)
                            {
                                PSErrorCollection.Add(Errordata[count]);
                            }
                        }
                        break;
                }
            }
            PExecutor.Stop();

            // Create the Execution Return block.
            ExecutionResults ER = new ExecutionResults(Block.RuleGuid, Block.SubRuleGuid, Block.MessageIdentfier);
            ER.ExitCode = ExitCode;

            // Add in the data results.
            lock (ReadSync)
            {
                if (PSCollection.Count > 0)
                {
                    ER.DataAdd(PSCollection);
                }
            }

            // Add in the error data if any.
            if (Error)
            {
                if (PSErrorCollection.Count > 0)
                {
                    ER.ErrorAdd(PSErrorCollection);
                }
                else
                {
                    ER.InError = true;
                }
            }

            // We have finished, so enqueue the block back.
            EnQueueOutput(ER);
        }

    and this is the PipelineExecutor class, which sets up the pipeline for execution:

        public class PipelineExecutor
        {
            private Pipeline pipeline;
            private WaitHandle[] Handles;

            public Pipeline PLine { get { return pipeline; } }
            public WaitHandle[] Hand { get { return Handles; } }

            public PipelineExecutor(Runspace runSpace, string command)
            {
                pipeline = runSpace.CreatePipeline(command);
                Handles = new WaitHandle[2];
                Handles[0] = pipeline.Output.WaitHandle;
                Handles[1] = pipeline.Error.WaitHandle;
            }

            public void Start()
            {
                if (pipeline.PipelineStateInfo.State == PipelineState.NotStarted)
                {
                    pipeline.Input.Close();
                    pipeline.InvokeAsync();
                }
            }

            public void Stop()
            {
                pipeline.StopAsync();
            }
        }

    And this is the DataAdd method, where the exception arises:

        public void DataAdd(Collection<PSObject> Data)
        {
            // Note: this loop adds to the same collection it is enumerating, which both
            // modifies the collection during enumeration and grows it without end --
            // consistent with the "Collection was modified" and OutOfMemoryException above.
            foreach (PSObject Ps in Data)
            {
                Data.Add(Ps);
            }
        }

    I put a for loop around the Data.Add, and the collection filled up with 600k+ items, so it feels like the gps command is still running — but why? Any ideas? Thanks in advance.

    Read the article

  • Question about how to use strongly typed DataSets in an N-tier .NET application

    - by sb
    Hello all, I need some expert advice on strongly typed data sets in ADO.NET that are generated by Visual Studio. Here are the details; thank you in advance. I want to write an N-tier application where the presentation layer is C#/Windows Forms, the business layer is a web service, and the data access layer is a SQL database. So I used Visual Studio 2005 and created 3 projects in a solution.

    Project 1 is the data access layer. In this I used the Visual Studio data set generator to create a strongly typed data set and table adapter (to test, I created this on the Customers table in Northwind). The data set is called NorthwindDataSet and the table inside is CustomersTable.

    Project 2 has the web service, which exposes only one method, GetCustomersDataSet. This uses project 1's table adapter to fill the data set and return it to the caller. To be able to use the NorthwindDataSet and table adapter, I added a reference to project 1.

    Project 3 is a WinForms app which uses the web service as a reference and calls that service to get the data set. In the process of building this application, in the PL, I added a reference to the DataSet generated above in project 1, and in the form's load I call the web service and assign the received DataSet to this dataset. But I get the error:

        "Cannot implicitly convert type 'PL.WebServiceLayerReference.NorthwindDataSet' to 'BL.NorthwindDataSet' e:\My Documents\Visual Studio 2008\Projects\DataSetWebServiceExample\PL\Form1.cs"

    Both data sets are the same, but because I added references from different locations, I think that is why I get the above error. So what I did was add a reference to project 1 (which defines the data set) to project 3 (the UI) and use the web service to get the DataSet, assigning it to the right type. Now when project 3 (which has the web form) runs, I get the runtime exception below:

        "System.InvalidOperationException: There is an error in XML document (1, 5058). ---> System.Xml.Schema.XmlSchemaException: Multiple definition of element 'http://tempuri.org/NorthwindDataSet.xsd:Customers' causes the content model to become ambiguous. A content model must be formed such that during validation of an element information item sequence, the particle contained directly, indirectly or implicitly therein with which to attempt to validate each item in the sequence in turn can be uniquely determined without examining the content or attributes of that item, and without any information about the items in the remainder of the sequence."

    I think this might be because of some cross-referencing errors. My question is: is there a way to use the Visual Studio generated DataSets such that I can use the same DataSet in all layers (for reuse) but keep the table adapter logic in the data access layer, so that the front end is abstracted from all this by the web service? If I hand-write the code, I lose the goodness the data set generator gives, and if columns are added later I need to add them by hand, etc., so I want to use the Visual Studio wizard as much as possible. Thanks for any help on this. sb

    Read the article
