Search Results

Search found 60391 results on 2416 pages for 'data generation'.

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C', who isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups, by adding winbind to the passwd and group stanzas in /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
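
    Before touching the share definition it may help to confirm that the group membership is actually visible to Samba on the Ubuntu side, since a valid users = @... line can only match if the AD group resolves through winbind/NSS. The commands below are standard Samba and NSS tools offered as a diagnostic sketch, not a fix; 'C' stands in for the real test account name:

        # validate smb.conf and dump the effective share definitions
        testparm -s

        # confirm winbind can enumerate domain users and groups at all
        wbinfo -u
        wbinfo -g

        # check which groups the test user actually maps to via NSS
        id 'LLDOMAIN\C'
        getent group 'LLDOMAIN\LLGrpManager'

    If id does not list LLGrpManager for users A and B, or getent cannot resolve the group, the share-level restriction has nothing to match against, regardless of how the share stanza is written.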

  • Enrich a dataset of POIs with OpenStreetMap

    - by zero
    Update: thanks to hints from users such as Oliver Salzburg and slhck I have become aware of gis.stackexchange.com, so I have moved the topic there myself. Please can you, or somebody who has the permission, close this question, since we do not need the topic on two sites. Thanks for your work, and keep up the service here; Stack sites rock.

    I have a list of POIs, some with a full description and some with only a few data entries, like the following:

        6.9441000 50.9242000 [50677] (Ital) Casa di Biase [Köln]
        6.9373600 50.9291800 [50674] (Ital) Al Setaccio [Köln]

    However, I need the full dataset. Can I get this somewhere? If I have all the position data, is it possible to find the rest?

        a. name of the street
        b. name of the town

    So for example, the data should finally look like this:

        10.5346100 52.1613600 [38300] (Chin) Wanbao Kommissstr.9 [Wolfenbüttel]
        13.2832500 52.4422600 [14167] (Ital) LaPergola Unter den Eichen 84d [Berlin]
        13.3177700 52.5062900 [10625] (Chin) Good Friends Kantstr.30 [Berlin]

    Can I do this with OpenStreetMap? Should I parse OpenStreetMap data? Or OpenBabel?
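
    Filling in street and town names from coordinates is a reverse-geocoding task, and OpenStreetMap data can be queried for it through the Nominatim service. Below is a minimal Python sketch, assuming the requests library is available and that the public Nominatim endpoint and its usage policy (roughly one request per second, with a descriptive User-Agent) are acceptable for the size of the list; for a large batch you would run your own Nominatim instance instead. The User-Agent string is a placeholder.

        import time
        import requests

        def reverse_geocode(lat, lon):
            """Look up street and town for a coordinate via OSM's Nominatim service."""
            resp = requests.get(
                "https://nominatim.openstreetmap.org/reverse",
                params={"lat": lat, "lon": lon, "format": "json", "addressdetails": 1},
                headers={"User-Agent": "poi-enrichment-example"},  # placeholder UA string
                timeout=10,
            )
            resp.raise_for_status()
            address = resp.json().get("address", {})
            street = address.get("road", "")
            town = address.get("city") or address.get("town") or address.get("village", "")
            return street, town

        # first POI from the list above; note the list stores longitude before latitude
        print(reverse_geocode(50.9242000, 6.9441000))
        time.sleep(1)  # the public endpoint asks for at most one request per second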

  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point: btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects: If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem. On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
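
    For comparison, btrfs-progs ships a separate command whose whole job is defragmentation, so whatever balance does or does not do to fragmentation, the explicit route looks like the sketch below; the mount point is a placeholder.

        # recursively defragment files under a mount point (btrfs-progs)
        btrfs filesystem defragment -r -v /mnt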

  • "Enter/Return" Key with Word Mobile on Windows Mobile

    - by Maarx
    Some of our employees are using PDAs running Windows Mobile. I wish I could provide more data regarding versions, but frankly these things aren't my jurisdiction. Someone's simply come to me looking for what they thought would be a quick fix. They're using Word Mobile and the barcode scanner to record large volumes of data. The scanner's default action is to insert the scanned text exactly as if it had been input with the keyboard, and puts a newline at the end. That's great, because it's exactly what we need it to do: separate data with newlines. The issue comes when system can't read the barcode, and the employee has to type in the data by hand. They've discovered a very peculiar quirk of Mobile applications: pressing the hardware Enter/Return key on the keyboard appears to save and exit the application. How do we change this behavior? They've realized that using the stylus to "click" the virtual on-screen keyboard's Enter/Return key will add the necessary newline, but it's a huge inconvenience for them. How do I fix the default behavior of the Enter/Return key for Word Mobile to instead insert a newline?

  • Cannot Access Local Network Shares (Strange Schannel and lsass.exe issues)

    - by Fake
    When I browse to my own computer's shares by going to \\MYCOMPUTERNAME\, I cannot access any of the shares on my LOCAL machine (nor can I access them remotely), and it generates about 40 of the following errors in my system event log:

        The following fatal alert was generated: 10. The internal error state is 1203.

    Details:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Schannel" Guid="{1F678132-5938-4686-9FDC-C8FF68F15C85}" />
            <EventID>36888</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8000000000000000</Keywords>
            <TimeCreated SystemTime="2011-04-05T13:52:09.144278900Z" />
            <EventRecordID>79628</EventRecordID>
            <Correlation />
            <Execution ProcessID="552" ThreadID="672" />
            <Channel>System</Channel>
            <Computer>DEVELOP4.CONTOSO.COM</Computer>
            <Security UserID="S-1-5-18" />
          </System>
          <EventData>
            <Data Name="AlertDesc">10</Data>
            <Data Name="ErrorState">1203</Data>
          </EventData>
        </Event>

    Additional information:

        The process that is generating the error is lsass.exe
        OS: Windows 7 Professional x64
        Joined to domain: yes
        I was able to access the shares locally in the past
        I am having the same issue on 3 other computers that have similar configurations

    Any help would be greatly appreciated, because I have no idea what's wrong. Thanks!

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi. This sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi, which runs as the www-data user. So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what's the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permissions, I think, and git (or gitolite) obviously needs read/write permissions to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right; at least one of the two (git or Trac) stops working. Any suggestions are highly appreciated. Regards, klemens
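
    One common layout, sketched below under the assumption that gitolite's repositories live in /home/git/repositories and that Trac only ever needs read access, is to keep git as the owning user, make the repositories group-readable, and add www-data to the git group; the setgid bit on directories keeps newly pushed objects group-accessible. Gitolite also has its own UMASK setting in its rc file that needs to agree with this, so treat the commands purely as an illustration of the idea rather than a complete recipe.

        # add the Apache user to the git group as a read-only consumer
        usermod -a -G git www-data

        # group-readable repositories; setgid on directories so files created
        # by future pushes stay accessible to the group
        chown -R git:git /home/git/repositories
        chmod -R g+rX /home/git/repositories
        find /home/git/repositories -type d -exec chmod g+s {} +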

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. Probably the size of data will grow significantly faster on some machines than others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that would use disk space sparingly, while still allowing for easy growing of individual machines. In theory, I would just create big disks for each machine and use thin provisioning. Each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to eg. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's even no thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks - but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: Are there better ways to do this? (that is, to grow data size as needed without downtime.) Possible solutions include: Using a thin-provisioning friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size. Finding an easy method of reclaiming free space on the partition (re-thinning?) Something else? A bonus question: If I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against conventions to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
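
    As a concrete picture of the "small LVM volume on a big thin-provisioned disk" plan, growing the volume and the filesystem later is a two-command operation and works online for ext3/ext4; the volume group and logical volume names below are placeholders.

        # give the logical volume another 50 GB from the (thin) physical volume
        lvextend -L +50G /dev/vg_data/lv_customer

        # grow the ext3/ext4 filesystem to fill the enlarged volume, while mounted
        resize2fs /dev/vg_data/lv_customer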

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance. A couple of truths:

        - None of the databases on the current instance are OLTP databases, nor are any of them data warehouses.
        - My database currently has joins/views to a couple of the other databases in the production environment.

    So my questions are as follows: Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases have regulatory data too. What types of questions do I need to ask to determine whether this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?) I have done some research on linked servers. What arguments should I bring forward about the fact that I currently have views set up that rely on data from other databases?
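
    For reference, if the database does get moved, the cross-database views described here would have to become cross-instance queries. A hedged T-SQL sketch of what that looks like with a linked server follows; the server, database, and table names are made up for the example, and every such query adds network latency and another security configuration to maintain.

        -- one-time setup on the instance that hosts the views
        EXEC sp_addlinkedserver @server = N'OTHERINSTANCE', @srvproduct = N'SQL Server';

        -- views then reference the remote object with a four-part name
        SELECT t.SomeColumn
        FROM [OTHERINSTANCE].[OtherDb].[dbo].[SomeTable] AS t;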

  • DVD Share on Vista Home Premium Failing

    - by hpunyon
    UPDATE: - I can't find any Local Policy Editor for Vista Home Premium, as suggested. - I did learn about registry keys: allocatecdroms, allocatefloppies, allocatedasd and tried adding these keys (individually and collectively) and setting them to both 0 or 1. There was no positive affect on read access to the DVD root folder - always Access Denied. ORIGINAL POST: Failing read access to the root folder of a DVD drive in Vista Home Premium laptop using the Guest account - Access Denied. The client is an XP Home PC that can see, but not access, the data in the share. I'm only trying to read the data DVD - not trying to write/burn anything. On the Vista laptop, I have: All Firewalls and Antivirus disabled.UAC disabled. Password checking disabled. "Advanced Shared" the DVD drive, with "Everyone" having full-access permissions to the share. Tried adding Guest and Anonymous users having full-access permissions to the share. RestrictAnonymous=0 set in the registry. Both PC's are in the same workgroup (MSHOME) The XP Home client sees the shared DVD in \Vista_Hostname\ but when I double click the drive icon on the client, I get a popup that access is denied, check with the administrator, etc. I can share other folders on the Vista PC and see and READ these from the XP Home client. If I enable password checking on the Vista side, I get a user/password popup, and I can authenticate (using my known Vista account, that happens to have Admin rights) and then I can get to see and read the DVD data. I need to open this up so that the (default) Guest user can see and access the DVD data files.

  • Solaris Mysql Failure and Unable to Restart

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for 6 months now; today, all of a sudden, it crashed. When typing mysql as a regular user it gives the error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    When typing mysql as root it says mysql: not found. The service tries to start mysqld, it stays up for 9-10 seconds, and then the process restarts. Below is the application log, Application-database-mysql_mysql-csk.log:

        [ May 30 22:37:52 Enabled. ]
        [ May 30 22:37:58 Rereading configuration. ]
        [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:37:59 Method "start" exited with status 0 ]
        [ May 30 22:38:13 Stopping because all processes in service exited. ]
        [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:13 Method "stop" exited with status 0 ]
        [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:38:13 Method "start" exited with status 0 ]
        [ May 30 22:38:25 Stopping because all processes in service exited. ]
        [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it.

  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and put some delays in it:

        - Between listen and accept, to test proxy_connect_timeout
        - Between accept and read, to test proxy_send_timeout
        - Between read and send, to test proxy_read_timeout

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the Nginx default configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

        proxy_connect_timeout - Even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at so early a stage (before accept). I would expect 200 here. Please explain!
        proxy_send_timeout - I am not sure if my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter. After all, the delay between accept and read does not trigger proxy_send_timeout.
        proxy_read_timeout - This one seems pretty straightforward. Setting a delay between read and write does the job.

    So I guess my assumptions are wrong and probably I do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or by modifying it if required)?
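
    A side note on the test itself, offered as a plausible reading rather than a definitive answer: because the script calls listen() before sleeping, the kernel can complete the TCP handshake from the listen backlog even though accept() has not run yet, so from nginx's side the connect phase succeeds immediately and the subsequent wait for a response is governed by proxy_read_timeout, whose default is 60s; that would match the 504 after roughly a minute. For experimenting, all three directives can sit side by side in the location block; the values below are arbitrary.

        location / {
            proxy_pass            http://localhost:8888;
            proxy_connect_timeout 10s;   # establishing the TCP connection to upstream
            proxy_send_timeout    20s;   # between two successive writes of the request
            proxy_read_timeout    30s;   # between two successive reads of the response
        }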

  • Updated my WAMP Server and MySQL is eating up 580mB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount?

    My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine    Data Size    Index Size    Total Size
        InnoDB            48.00 KB     0.00 B        48.00 KB
        MEMORY            0.00 B       0.00 B        0.00 B
        MyISAM            163.64 MB    122.49 MB     286.13 MB
        Total             163.69 MB    122.49 MB     286.18 MB
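
    Much of the jump between 5.1 and 5.6 defaults comes from features that 5.6 enables out of the box, the Performance Schema in particular, and from larger default caches, so one low-risk experiment on a single-user dev box is to trim those back in my.ini. The sketch below is only a starting point under that assumption, not tuned values; each option is a real 5.6 setting, but the sizes are guesses for a small local instance.

        [mysqld]
        # Performance Schema is on by default in 5.6 and pre-allocates a large chunk of memory
        performance_schema = OFF

        # shrink InnoDB's default 128M pool; the InnoDB data here is only ~48 KB
        innodb_buffer_pool_size = 16M

        # 5.6 raised several per-object cache defaults; smaller values suit ~20 local databases
        table_open_cache = 400
        table_definition_cache = 400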

  • How to set up NTFS ACLs with Access Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2008 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. While NetWare and Windows are totally different, I want to be sure my thinking is good before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA   <= shared as DATA with Access Based Enumeration
            |
            +-- Folder 1
            +-- Team 1's Folder
            +-- Team 2's Folder
            ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want:

        - The Administrators group has full control top-down.
        - From DATA, ABE lists only the folders a user has access to (e.g. if I'm in group Team 2, I see Team 2's Folder).

    From what I understand, at DATA I remove all inherited NTFS ACLs (e.g. the Users group), making sure to keep the Administrators group and the SYSTEM user. After that, I grant Full Control (or whatever right is needed) on each folder to the groups or users that need access. Am I wrong? Anything I should take care of? Any help with my understanding will be very appreciated. Regards.
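
    The plan described maps fairly directly onto icacls; the sketch below is a rough illustration, with made-up group names, where (OI)(CI) makes a grant inherit to files and subfolders. It is worth testing on a copy of the tree first, since /inheritance:r strips every inherited entry, and ordinary users still need to be able to list the share root itself for ABE to have anything to filter.

        :: stop inheriting from F:\ at the share root; keep admins and SYSTEM, and let
        :: ordinary users list the root folder only (ABE hides what they cannot open)
        icacls "F:\DATA" /inheritance:r
        icacls "F:\DATA" /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F"
        icacls "F:\DATA" /grant "Authenticated Users:(RX)"

        :: grant each team only its own folder
        icacls "F:\DATA\Team 1's Folder" /grant "DOMAIN\Team1:(OI)(CI)M"
        icacls "F:\DATA\Team 2's Folder" /grant "DOMAIN\Team2:(OI)(CI)M"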

  • How does Tunlr work?

    - by gravyface
    For those of you not in the US, Tunlr uses DNS witchcraft to allow you to access US-only (and UK-only stuff like BBC radio online) services and Websites like Hulu.com, etc. without using traditional methods like a VPN or Web proxy. From their FAQ: Tunlr does not provide a virtual private network (VPN). Tunlr is a DNS (domain name system) unblocking service. We’re using sophisticated technologies (a.k.a. the Tunlr Secret Sauce ©) to re-adress certain data envelopes, tricking the receiver into thinking the envelope originated from within the U.S. For these data envelopes, Tunlr is transparently creating a network tunnel from your location to our U.S.-based servers. Any data that’s not directly related to the video or music content providers which Tunlr supports is not only left untouched, it’s also not even routed through Tunlr. In order to use Tunlr, you will have to change the DNS address. See Get started for more information. I can't really wrap my head around how this works; I have always assumed that these services performed a geolocation lookup via your client IP. Just really curious as to how this works. EDIT 2 I believe they're only proxying the initial geo check and then modifying the data stream request to include your real IP address so that the streaming is direct, not proxied.

  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has Apache and WSGI working. I also have a Python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')
        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi is related to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
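
    Since make_server already treats display_status as a WSGI application, a status.wsgi file can stay very small: mod_wsgi only needs a module-level name called application. The sketch below assumes display_status lives in a module named status_app under /mypath; both names are invented for the example.

        # /mypath/status.wsgi -- hypothetical path and module name
        import sys
        sys.path.insert(0, '/mypath')

        # display_status is the same callable that was passed to make_server()
        from status_app import display_status
        application = display_status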

  • Device CAL, User Cal or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine, the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on our servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers, all 43 of these servers are running standard edition which is fine. We also have 4 servers running standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 dba's that access these DW servers for maintenance (using the same windows login via remote desktop). So visually: 40 -- 3 -- [2] -- 2 -- 1 nodes -- aggregators -- raw (which we want to run EE) -- calculators -- datawarehouse Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to datwarehouse I am specifying the push vs. pull in case that changes how the # of licenses is calculated. Q1) how many device (or user) CAL's do we need? Q2) do I need to speak with someone from MSFT to find out if it is ok to install in the Amazon Cloud (Amazon said we need to verify it is ok in our license terms)? Q3) what happens if another device tries to access a server with the limited number of device CAL's? Q4) Are the device CAL's simultaneous number of devices or total? Q5) Do Device and User CAL's cost the same or is there a difference? Q6) Would we need to buy a processor license (we are hoping not to)?

  • How do I fix error 1303 during TI Connect install?

    - by smoth190
    I recently purchased a TI-84 Plus graphing calculator, and I'm trying install the TI Connect software in order to connect the calculator to my computer via the USB cable. Unfortunately, I'm getting this error while trying to install the program: Error 1303. The installation has insufficient privileges to access this directory: E:\Data\Timothy\Documents\MyTIData. The installation cannot continue. Log on as administrator or contact your system administrator. However, my account is the only account on my PC, and it has administrative privileges. I've also tried running the installer with Run as Administrator, but with no luck. If I create the folder MyTIData manually, I receive this error: Error 1317. An error occurred while attempting to create the directory: E:\Data\Timothy\Documents\MyTIData I've reapplied the security settings to the E:\Data folder (and all its sub-directories) to Full for my account. I've also gone into Computer Management, and given SYSTEM full privileges for the entire disk. I've also logged out, logged back in, restarted, etc. but still, no luck. Now, I should mention that my Documents folder is not at the default location. I changed it due to my C: disk being a 90GB SSD, so I moved all my personal data onto the extra storage disk (which is ~1TB). I don't know if that is causing the issue, but it can't hurt throwing it out there. So why can't I install this program? Google'ing the problem brings up this error for various other installers (such as Visual Studio and Microsoft Office), but nothing for TI Connect. All the solutions are the same: Give the folder Full privileges...but I've already done this! I've also tried running the installer with and without the calculator plugged in, but it didn't change anything. In the prompt that contains the error, repeatedly clicking Retry or waiting a few moments before clicking Retry also produces no result.

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties or a court that the application has processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible evidence, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or the like, but rather some simple user-generated content, where some malicious users have tried to blame the operator in the past when things escalated and went to court. My question: is there some kind of signing software for Linux that signs each element of a log in such a way that it can easily be shown that no element was added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under the GPL or a similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous elements' signatures within the following element, but I would prefer to use a program where you do not have to argue about the validity of your own code. Note to mods: I am not asking this question on Stack Overflow, because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault better than Super User, as server-side logging software is discussed here rather than on Super User.
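
    To make the scheme concrete (purely as an illustration of the chaining idea described above, not as the ready-made tool being asked for), here is a minimal Python sketch of a hash chain: each entry's digest covers the previous entry's digest plus its own content, so altering, inserting, or removing any earlier entry breaks every digest after it. In a real setup the chain head would additionally be GPG-signed or externally timestamped at intervals.

        import hashlib
        import json
        import time

        def append_entry(log, message):
            """Append an entry whose digest covers the previous entry's digest."""
            prev = log[-1]["digest"] if log else "0" * 64
            body = json.dumps({"ts": time.time(), "msg": message})
            entry = {"body": body, "prev": prev,
                     "digest": hashlib.sha256((prev + body).encode()).hexdigest()}
            log.append(entry)

        def verify(log):
            """Recompute the whole chain; any edit or deletion breaks it."""
            prev = "0" * 64
            for entry in log:
                digest = hashlib.sha256((prev + entry["body"]).encode()).hexdigest()
                if digest != entry["digest"] or entry["prev"] != prev:
                    return False
                prev = digest
            return True

        log = []
        append_entry(log, "user 42 uploaded item 17")
        append_entry(log, "item 17 reviewed by moderator")
        print(verify(log))              # True
        log[0]["body"] = log[0]["body"].replace("42", "43")
        print(verify(log))              # False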

  • terminal-window viewer for tab-delimited files in *nix?

    - by khedron
    I work with a lot of tab-delimited data files, with varying columns of uncertain length. Typically, the way people view these files is to bring them down from the server to their Windows or Mac machine, and then open them up in Excel. This is certainly fully-featured, allowing filtering and other nice options. But sometimes, you just want to look at something quickly on the command line. I wrote a bare-bones utility to display the first<n>lines of a file like so: --- line 1 --- 1:{header-1} 2:{header-2} 3:... --- line 2 --- 1:{data-1} 2:{data-2} 3:... This is, obviously, very lame, but it's enough to pipe through grep, or figure out which header columns to use "cut -f" on. Is there a *nix-based viewer for a terminal session which will display rows and columns of a tab-delimited file and let you move the viewing window over the file, or otherwise look at data? I don't want to write this myself; instead, I'd just make a reformatter which would replace tabs with spaces for padding so I could open the file up in emacs and see aligned columns. But if there's already a tool out there to do something like this, that'd be great! (Or, I could just live with Excel.)
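
    For the quick look described here, two standard tools already get most of the way without writing anything: column can pad tabs into aligned columns, and less -S pages wide rows without wrapping. A small example; the filename is a placeholder.

        # pad tab-separated columns into aligned output and page it without line wrapping
        column -t -s $'\t' data.tsv | less -S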

  • which grabber is good enough to get 1000fps?

    - by user261002
    I have two frame grabbers and a fast camera (1800+ fps). Can anybody who understands the hardware explain to me which of the following grabbers can help me more to grab 1000 fps? Here are the features of the two grabbers:

    Inspecta-5, Full Camera Link® version:

        - Support for line scan and area cameras.
        - Video data rate of up to 660 MBytes/sec.
        - PCI-X bus interface with 64-bit data width and 66 MHz clock frequency.
        - PCI bus interface with 32-bit data width and 33 MHz clock frequency.
        - 2 GB onboard memory for fast video streams.
        - Four opto-coupled input/output ports for external trigger and encoder signals.
        - 528 MBytes/sec. maximum data rate on the PCI-X bus.
        - SDK for Windows 2000/XP.

    SILICONSOFTWARE V-Series Camera Link: “microEnable IV VD4-CL”

        - Camera pixel clock support up to 85 MHz.
        - Area scan cameras: 32k * 64k max. image size.
        - Line scan cameras: 64k max. image width.
        - Acquisition buffer: 512 MB DDR-RAM.
        - Sustainable transfer rate (max.): 850 MBytes/sec.
        - microEnable SDK for Windows XP/Vista/7/Linux.

  • CMD/ADB - Autorun script to search, copy, and paste a file from android system to flash drive

    - by Outride
    I've looked around and can't find anything that answers my question. This is my first question, so any tips or thoughts are welcome, as well as an answer :p As explained in the title, I want to create a script that launches, finds a file on an Android phone, copies it, and pastes it to a flash drive. As of right now, it's a mix of multiple tutorials and trial and error, and I'm at the point of giving up. Currently I have a flash drive loaded with three scripts, as follows (bold = name of file):

    file.bat

        @echo off
        :: variables
        /min
        SET odrive=%odrive:~0,2%
        set backupcmd=xcopy /s /c /d /e /h /i /r /y
        echo off
        %backupcmd% "C:\Users\Outride\Desktop\kikDatabase.db" "%drive%\all"
        @echo off
        cls

    invisible.vbs

        CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

    launch.bat

        wscript.exe \invisible.vbs file.bat

    So far, I had to use Android Commander, manually go through the directory, find /data/data/kik.android/databases, copy kikDatabase.db to my desktop, and then run this script. Yes, I'm trying to pull the database to copy all my email contacts. I use launch.bat, which then makes file.bat invisible thanks to the invisible.vbs script. What would I need to do now to have the file searched for and copied to the flash drive? Thanks in advance, I'll be glad to answer any questions if there are any :p Just remember that I'm not exactly a tech expert haha.

    EDIT: Cleared the junk from prior edits. New: I now have a .bat script that recognizes which drive the USB stick is on and launches py_cmd (the adb shell). This is the current script:

    pull.bat

        @echo off
        :: variables
        SET odrive=%odrive:~0,2%
        set launching=start "%drive%\Minimal ADB and Fastboot\py_cmd"
        echo off
        %launching%

    So how could I make the .bat, or a new script, type the following into the adb terminal: adb pull /data/data/kik.android/databases/ %drive%\All\Database ? Please help! I've been racking my brain over this all night :3
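
    Rather than typing into the adb shell window, the adb command-line client can be invoked directly from a batch file with its pull subcommand. Below is a rough sketch under the assumptions that adb.exe sits in the "Minimal ADB and Fastboot" folder on the stick and that the phone actually allows pulling that path (on many devices /data/data is only readable with root); the folder names are taken from the question.

        @echo off
        :: %~dp0 expands to the drive and folder this batch file runs from
        set adb="%~dp0Minimal ADB and Fastboot\adb.exe"

        if not exist "%~dp0All\Database" mkdir "%~dp0All\Database"

        :: copy the database folder from the phone onto the flash drive
        %adb% pull /data/data/kik.android/databases/ "%~dp0All\Database"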

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of "45". I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing ‘/dev/sdb’: Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leads me to several other questions:

        - Why aren't the reallocations being recorded by the disk? I'm assuming a reallocation took place, as I can now write data directly to the sectors and couldn't before.
        - Why did shred cause reallocation and not dd?
        - Does the fact that shred writes random data instead of just zeros make a difference?
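
    One way to probe a single suspect LBA more directly than dd, sketched below, is hdparm's sector-level options; --write-sector destroys whatever was in that sector and requires an explicit confirmation flag, so this is strictly for sectors that are already unreadable. The LBA is the one from the dd command above.

        # try to read the suspect sector; a pending sector typically returns an I/O error
        hdparm --read-sector 217152 /dev/sdb

        # overwrite just that sector, which gives the drive a chance to remap it
        hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb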

  • Mesh Networked servers via vpn

    - by microspino
    I've got a design idea and I would like to have some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do maintenance for servers. They also told me that all of them have little servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others in a sort of mesh network, without a single point of failure, through VPNs. If one of the servers went down, the customers could still connect to their databases from one of the other mesh-networked servers instead of from the local one that is down. During normal operations all the servers sync the db with the others through the VPNs. I can accept a half-day window of non-synched data; in other words, since I don't need real-time synchronization, the servers don't always have to stay in sync. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest. As you can see I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy and management as much as I can to my customers' offices. Is that a crazy idea? Do you know if something similar exists? Which technology would you suggest?

  • What is the correct authentication mechanism when there are users inside and outside the domain?

    - by Gary Barrett
    We have a Windows 7 enterprise desktop data entry app for mobile (laptop) users with local SQL Express 2008 R2 Express db that syncs data with an SQL Server 2008 R2 Server db. Authentication is required before syncing the data. The existing group of users are part of the organisation's domain so normal scenario and they connect to the Sql Server directly. But there are plans for a second group of app users who belong to various partner organisations so they are outside our domain and have their own various separate domains/accounts. The aim is to deploy the desktop app to them and they will periodically sync data to our SQL Server. What I am uncertain of: Is it possible to authenticate users from another domain? Can permissions be managed via Active Directory etc? Which authentication protocol should be used in this scenario? Windows, Forms, SQL, etc? The IT people are requesting users of the system be managed via Active Directory. Is it possible to manage the external domain users access via Active Directory?

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over top of it. I check the option:

        Overwrite the existing database (WITH REPLACE)

    But, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler:

        File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used to relocate one or more files.
        RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176)

    Update: The script (before I deleted the database, because I needed to get it done) was:

        RESTORE DATABASE [HealthCareGovManager]
            FILE = N'HealthCareGovManager_Data',
            FILE = N'HealthCareGovManager_Archive',
            FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
            MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
            MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
            MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
            MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
            NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non-)existing database. Hopefully there can be an answer so that the next guy can have one. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).
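
    Reading the script against the error text, the MOVE clauses for HealthCareGovManager_Archive and HealthCareGovManager_AuditLog both point at the same physical file, D:\CGI Data\HealthCareGovManager.ndf, which is exactly the situation the "File ... is claimed by ...(4) and ...(3)" message describes. A hedged sketch of the same restore with each logical file mapped to its own target follows; the new .ndf filenames are invented for the example.

        RESTORE DATABASE [HealthCareGovManager]
            FILE = N'HealthCareGovManager_Data',
            FILE = N'HealthCareGovManager_Archive',
            FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
            MOVE N'HealthCareGovManager_Data'     TO N'D:\CGI Data\HealthCareGovManager.mdf',
            MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
            MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',
            MOVE N'HealthCareGovManager_Log'      TO N'D:\CGI Data\HealthCareGovManager.ldf',
            NOUNLOAD, REPLACE, STATS = 10;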
