Search Results

Search found 86 results on 4 pages for 'zac prunty'.

Page 1/4 | 1 2 3 4  | Next Page >

  • AJI Report #15–Zac Harlan Talks About Iowa Code Camp

    - by Jeff Julian
    We sit down with Zac Harlan and talk about Iowa Code Camp, what makes up a Code Camp, and how to start your own Code Camp. Zac has been part of the Iowa Code Camp leadership team for a few years and is the Development Manager for JP Cycles. We also get into what it takes to speak at a Code Camp if you are interested in growing beyond the user group as a speaker. Listen to the Show. Site: LinkedIn Profile | Blog: Zac Harlan | Twitter: @ZacHarlan

    Read the article

  • RowFilter.regexFilter multiple columns

    - by twodayslate
    I am currently using the following to filter my JTable:

      RowFilter.regexFilter( Pattern.compile(textField.getText(), Pattern.CASE_INSENSITIVE).toString(), columns );

    How do I format my text field or filter so that I can filter on multiple columns at once? Right now I can filter across multiple columns, but the filter text only ever matches within a single column. An example might explain this better:

      Name  Grade  GPA
      Zac   A      4.0
      Zac   F      1.0
      Mike  A      4.0
      Dan   C      2.0

    The text field would contain "Zac A" or something similar, and it would show only the first Zac row if columns was int[]{0, 1}. Right now if I do the above I get nothing. The filter "Zac" works, but I get both Zac rows. "A" also works, but I would then get Zac A 4.0 and Mike A 4.0. I hope I have explained my problem well. Please let me know if you do not understand.
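
    One possible approach, sketched below as my own assumption rather than anything from the original post: split the text field on whitespace and AND together one regex filter per token, so each token may match in any of the chosen columns. textField, columns, and the row sorter are the ones referenced in the question.

      // Assumes javax.swing.RowFilter, java.util.regex.Pattern, java.util.ArrayList/List,
      // plus the textField, columns and sorter already present in the question's code.
      List<RowFilter<Object, Object>> tokenFilters = new ArrayList<RowFilter<Object, Object>>();
      for (String token : textField.getText().trim().split("\\s+")) {
          // One case-insensitive filter per token; each token may match any of the chosen columns
          tokenFilters.add(RowFilter.<Object, Object>regexFilter("(?i)" + Pattern.quote(token), columns));
      }
      // AND the per-token filters together: "Zac A" then only matches rows containing both tokens
      sorter.setRowFilter(RowFilter.andFilter(tokenFilters));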

    Read the article

  • why aren't interfaces [Serializable]?

    - by Zac Harlan
    I would think that adding that attribute to an interface would be helpful to make sure you don't create classes that use the interface and forget to make them serializable. This may be a very fundamental question, but I wanted to ask the experts. Thanks in advance. Zac
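
    As an aside that is my own illustration rather than part of the question: [Serializable] cannot be applied to an interface (its AttributeUsage only allows classes, structs, enums and delegates), and attributes would not flow from an interface to its implementers anyway. A hedged sketch of one way to get the "don't forget serialization" effect is to have the interface extend ISerializable, so the compiler forces every implementer to deal with it:

      using System;
      using System.Runtime.Serialization;

      // IWidget/Widget are invented names, for illustration only.
      public interface IWidget : ISerializable
      {
          void Spin();
      }

      [Serializable]
      public class Widget : IWidget
      {
          public int Rpm { get; set; }

          public void Spin() { Rpm += 100; }

          // Required by ISerializable: omitting this is a compile error,
          // which is roughly the "reminder" the question is after.
          public void GetObjectData(SerializationInfo info, StreamingContext context)
          {
              info.AddValue("Rpm", Rpm);
          }
      }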

    Read the article

  • VSFTPD does not allow upload with virtual users

    - by Mr. Squig
    I am attempting to set up VSFTPD with virtual users on a server running Ubuntu 12.04. I have configured the server to allow virtual users to log in, but I am having trouble getting it to allow uploads. My vsftpd.conf is as follows:

      listen=YES
      anonymous_enable=NO
      local_enable=YES
      write_enable=YES
      local_umask=022
      anon_upload_enable=YES
      dirmessage_enable=YES
      use_localtime=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      chroot_local_user=YES
      virtual_use_local_privs=YES
      guest_enable=YES
      guest_username=virtual
      user_sub_token=$USER
      local_root=/var/www/$USER
      hide_ids=YES
      secure_chroot_dir=/var/run/vsftpd/empty
      pam_service_name=vsftpd
      rsa_cert_file=/etc/ssl/private/vsftpd.pem

    /etc/pam.d/vsftpd contains:

      auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd crypt=hash
      account required pam_permit.so crypt=hash

    I have two virtual users set up, one of which has the same name as a local user. They each have a directory in /var/www/ owned by 'virtual'. As I understand it, when a virtual user logs in this way they will appear to the system as the user 'virtual'. Using this configuration, users can log on but cannot upload files. The error given in /var/log/vsftpd.log is:

      Tue Nov 20 19:49:00 2012 [pid 2] CONNECT: Client "96.233.116.53"
      Tue Nov 20 19:49:07 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 2] CONNECT: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 1] [zac] OK LOGIN: Client "96.233.116.53"
      Tue Nov 20 19:49:11 2012 [pid 3] [zac] FAIL CHMOD: Client "96.233.116.53", "/test.ppm 644"

    I have tried changing the permissions of these directories in all sorts of ways, but nothing seems to work. I have a feeling that it is something simple related to permissions. Any ideas?
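
    A hedged diagnostic sketch, offered as my own suggestion rather than anything from the post: the FAIL CHMOD line is often just the FTP client issuing SITE CHMOD after a transfer, so it is worth checking whether the file actually landed, and confirming that the local user the guests are mapped to ('virtual') can write inside the chrooted tree. The directory names below are assumptions based on the question:

      # Did the upload arrive despite the CHMOD failure?
      ls -l /var/www/zac

      # Make sure the per-user trees are owned and writable by the guest account
      sudo chown -R virtual:virtual /var/www/zac
      sudo chmod -R u+rwX /var/www/zac

      # Watch the vsftpd log while retrying an upload
      sudo tail -f /var/log/vsftpd.log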

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when this happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency) as it will be consumed in different environments - nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
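
    To make the shape of that idea a little more concrete, here is a minimal sketch under the stated constraints (IMembershipRepository and MembershipService are placeholder names of my own, not an established design): the domain layer owns the abstraction and the business rules, while a thin "DomainMembershipProvider" in the web layer would derive from System.Web.Security.MembershipProvider and simply delegate overrides such as ValidateUser to this service.

      // Domain layer: no reference to System.Web anywhere.
      public interface IMembershipRepository
      {
          bool ValidateCredentials(string userName, string password);
          void CreateUser(string userName, string password, string email);
      }

      // Domain service holding the validation/business logic; it only ever talks
      // to the repository abstraction, so the data source can change freely.
      public class MembershipService
      {
          private readonly IMembershipRepository _repository;

          public MembershipService(IMembershipRepository repository)
          {
              _repository = repository;
          }

          public bool SignIn(string userName, string password)
          {
              if (string.IsNullOrEmpty(userName) || string.IsNullOrEmpty(password))
                  return false;

              return _repository.ValidateCredentials(userName, password);
          }
      }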

    Read the article

  • How do I reset a countdown timer when it reaches 0? (Xcode 4)

    - by Zac Prunty
    This is my first question, and I have researched the heck out of it and can't seem to find any answers. I would like to implement a countdown timer in my app that starts at 30 seconds and resets when it hits 0, counting down in 1-second intervals. I have played around with the resources that Apple provides and I can't seem to get an NSTimer to do as I just explained. Are there any suggestions on how I could code this? I would eventually like to link this to a button that starts the timer when it is pressed. I appreciate any advice on this topic!
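
    A minimal sketch of one way to do this, assuming a view controller with an IBOutlet label (timerLabel), a start button wired to startPressed:, and properties countdownTimer (NSTimer) and secondsRemaining (NSInteger) - all of these names are my own choices, not from the original question:

      - (IBAction)startPressed:(id)sender {
          self.secondsRemaining = 30;
          [self.countdownTimer invalidate];   // avoid stacking timers on repeated taps
          self.countdownTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                                  target:self
                                                                selector:@selector(tick:)
                                                                userInfo:nil
                                                                 repeats:YES];
      }

      - (void)tick:(NSTimer *)timer {
          self.secondsRemaining -= 1;
          if (self.secondsRemaining <= 0) {
              self.secondsRemaining = 30;     // reset the countdown when it hits 0
          }
          self.timerLabel.text = [NSString stringWithFormat:@"%ld", (long)self.secondsRemaining];
      }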

    Read the article

  • Learning C, C++ and C#

    - by Zac
    I'm sure you guys are tired of this question, but after wading through hours of similar posts and questions I've really not made any progress on my specific concerns. I was hoping you could shed some light on a couple of questions I have before I decide on a course of action.

    BACKGROUND: I want to enroll in some type of program to learn a programming language and get a certificate/degree to work in the field. I've always been interested; I bought a book on VB back in high school and dabbled. Now I want to get serious after a huge hiatus.

    Question 1: I've read that it's counter-productive to learn C first and then C++ or C#, because you develop bad habits. In a lot of college courses I've looked at, learning C/C++ is mandatory to advance. Should I even bother learning C? On a related note, I really don't understand the difference between C and C++, or C# for that matter, other than that C# incorporates .NET (which, I understand, is a collection of tools and libraries that make programming easier and faster).

    Question 2: Where did you guys learn to program? Where do you recommend? Is it possible to land a programming job being self-taught? Is my best chance an ITT Tech or a regular college? I was going to enroll in a JC and go from there, but I can't decide what to do.

    LAST question :) I heard C++ is being "ported" to .NET. True? And if so, is this going to make C++ a solid, in-demand language to learn? Thanks for looking. :)

    Read the article

  • Setting up apache to view https pages

    - by zac
    I am trying to set up a site using VMware Workstation, Ubuntu 11.10, and Apache2. The site works fine, but the https pages are not showing up. For example, if I try to go to https://www.mysite.com/checkout I just see the message:

      Not Found
      The requested URL /checkout/ was not found on this server.

    I don't really know what I am doing and have tried a lot of things to get the SSL certificates in there right. A few things I have in there: in my httpd.conf I just have:

      ServerName localhost

    In my ports.conf I have:

      NameVirtualHost *:80
      Listen 80

      <IfModule mod_ssl.c>
          # If you add NameVirtualHost *:443 here, you will also have to change
          # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
          # to <VirtualHost *:443>
          # Server Name Indication for SSL named virtual hosts is currently not
          # supported by MSIE on Windows XP.
          Listen 443 http
      </IfModule>

      <IfModule mod_gnutls.c>
          Listen 443 http
      </IfModule>

    In /etc/apache2/sites-available/default-ssl:

      <IfModule mod_ssl.c>
      <VirtualHost _default_:443>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
      .... truncated

    In sites-available/default I have:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www
          #DocumentRoot /home/magento/site/
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
          #<Directory /home/magento/site/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

      <VirtualHost *:443>
          SSLEngine on
          SSLCertificateFile /etc/apache2/ssl/server.crt
          SSLCertificateKeyFile /etc/apache2/ssl/server.key
          ServerAdmin webmaster@localhost
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
          #<Directory /home/magento/site/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
      </VirtualHost>

    I also have a file in sites-available set up for my site URL, www.mysite.com, so in /etc/apache2/sites-available/mysite.com:

      <VirtualHost *:80>
          ServerName mysite.com
          DocumentRoot /home/magento/mysite.com
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /home/magento/mysite.com/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ErrorLog /home/magento/logs/apache.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
      </VirtualHost>

      <VirtualHost *:443>
          ServerName mysite.com
          DocumentRoot /home/magento/mysite.com
          <Directory />
              Options FollowSymLinks
              AllowOverride All
          </Directory>
          <Directory /home/magento/mysite.com/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride All
              Order allow,deny
              allow from all
          </Directory>
          ErrorLog /home/magento/logs/apache.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
      </VirtualHost>

    Thanks for any help getting this set up! As is probably obvious from this post, I am pretty lost at this point.
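
    One hedged observation of my own, not from the original post: the *:443 virtual host for mysite.com above has no SSL directives, and without NameVirtualHost *:443 the first-loaded default-ssl host (DocumentRoot /var/www) tends to answer every https request, which would explain a 404 for /checkout. A minimal sketch of what might be missing, reusing the certificate paths already shown above:

      # ports.conf (sketch)
      NameVirtualHost *:443
      Listen 443

      # sites-available/mysite.com (sketch of the SSL vhost)
      <VirtualHost *:443>
          ServerName mysite.com
          DocumentRoot /home/magento/mysite.com
          SSLEngine on
          SSLCertificateFile    /etc/apache2/ssl/server.crt
          SSLCertificateKeyFile /etc/apache2/ssl/server.key
      </VirtualHost>

    Something like "sudo a2enmod ssl", "sudo a2ensite mysite.com" and "sudo service apache2 restart" would then apply it (commands assume stock Ubuntu 11.10 packaging).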

    Read the article

  • I2C_SLAVE ioctl in i2c linux driver

    - by zac
    I am supposed to write a simple write and read program for I2C, but the problem is that I don't have the device at hand presently to test it, so I need my code to be perfect. I am confused about the function of the I2C_SLAVE ioctl. From what I read, this ioctl is used to set the slave address. But we pass the slave address again when performing read/write using the I2C_RDWR ioctl, via addr in the struct i2c_msg. So then, what is the function of the I2C_SLAVE command? Do I need to call it every time I perform a read or write operation? Thank you in advance.
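
    A small sketch of how the two interfaces differ (the bus number, device path and address 0x48 are placeholders of my own): I2C_SLAVE binds a slave address to the open file descriptor once, after which plain read()/write() target that slave, whereas I2C_RDWR carries the address inside each struct i2c_msg and does not need the I2C_SLAVE call at all.

      #include <fcntl.h>
      #include <linux/i2c-dev.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("/dev/i2c-1", O_RDWR);              /* bus number is an example */
          if (fd < 0) { perror("open"); return 1; }

          /* Bind this fd to slave 0x48; needed once, not before every transfer. */
          if (ioctl(fd, I2C_SLAVE, 0x48) < 0) { perror("I2C_SLAVE"); return 1; }

          unsigned char reg = 0x00, value;
          if (write(fd, &reg, 1) != 1) { perror("write"); return 1; }   /* select a register */
          if (read(fd, &value, 1) != 1) { perror("read"); return 1; }   /* read it back */
          printf("register 0x%02x = 0x%02x\n", reg, value);

          close(fd);
          return 0;
      }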

    Read the article

  • ISAPI filter with LDAP over SSL only works as administrator

    - by Zac
    I have created an ISAPI filter for IIS 6.0 that tries to authenticate against Active Directory using LDAP. The filter works fine when authenticating regularly over port 389, but when I try to use SSL, I always get the 0x51 Server Down error at the ldap_connect() call. Even skipping the connect call and using ldap_simple_bind_s() results in the same error. The weird thing is that if I change the app pool identity to the local admin account, then the filter works fine and LDAP over SSL is successful. I created an exe with the same code below and ran it on the server as admin and it works. Using the default NETWORK SERVICE identity for the site's app pool is what seems to be the problem. Any thoughts as to what is happening? I want to use the default identity since I don't want the website to have elevated admin privileges. The server is in a DMZ outside the network and domain where our DCs are that run AD. We have a valid certificate on our DCs for AD as well. Code:

      // Initialize LDAP connection
      LDAP * ldap = ldap_sslinit(servers, LDAP_SSL_PORT, 1);
      ULONG version = LDAP_VERSION3;
      if (ldap == NULL)
      {
          strcpy(error_msg, ldap_err2string(LdapGetLastError()));
          valid_user = false;
      }
      else
      {
          // Set LDAP options
          ldap_set_option(ldap, LDAP_OPT_PROTOCOL_VERSION, (void *) &version);
          ldap_set_option(ldap, LDAP_OPT_SSL, LDAP_OPT_ON);

          // Make the connection
          ldap_response = ldap_connect(ldap, NULL); // <-- Error occurs here!

          // Bind and continue...
      }

    UPDATE: I created a new user without admin privileges and ran the test exe as the new user, and I got the same Server Down error. I added the user to the Administrators group and got the same error as well. The only user that seems to work with LDAP over SSL authentication on this particular server is administrator. The web server with the ISAPI filter (and where I've been running the test exe) is running Windows Server 2003. The DCs with AD on them are running 2008 R2. Also worth mentioning, we have a WordPress site on the same server that authenticates against LDAP over SSL using PHP (OpenLDAP), and there's no problem there. I have an ldap.conf file that specifies TLS_REQCERT never, and the user running the PHP code is IUSR.

    Read the article

  • Does the Intel DX79TO motherboard support x8 devices (SAS HBAs) on PCIe x16 slots?

    - by Zac B
    Context: I have an Intel DX79TO motherboard and a Sun SAS3081E-S/LSI 1060E-S HBA card with a PCIe x8 interface. I plug the HBA into my mobo next to my graphics card and the HBA power lights illuminate, but the BIOS and OSes (tried Linux, ESXi, Win7) don't see the HBA at all.

    Question: Does the DX79TO motherboard support non-x16/non-GPU devices in its PCIe x16 slots? According to this question, some consumer motherboards don't support this, but I can't figure out whether or not this motherboard/family does. The answer will determine whether I buy a new motherboard or RMA the SAS card, with money riding on either course of action, so I figured I'd ask here first.

    What I've Tried: I've read the specs/manuals for the motherboard and the HBA, and I didn't see anything about whether or not the x16 slots are backward-compatible with lower lane widths/non graphics-card devices, or whether or not the card can run in wider slots than x8. I've tried contacting Intel, but that was over a month ago and I haven't yet heard anything back except an automated "we got your email!" message.

    Read the article

  • Uploading files to https mediawiki

    - by zac
    I am unable to upload files (images) to my MediaWiki install. I think it may have something to do with it being hosted over HTTP Secure (HTTPS). I carefully followed the instructions here. I updated write permissions on the /images/ dir:

      drwxrwxrwx 2 apache apache 4096 Apr 13 19:04 images

    php.ini:

      file_uploads = On

    In LocalSettings.php:

      $wgEnableUploads = true;
      $wgFileExtensions = array('png','jpg','jpeg','gif');

    When I try to upload, the page quickly refreshes without any errors or any indication that anything went wrong, other than how fast it refreshed. When I navigate to the history of uploads, it is empty. How can I troubleshoot this? Is it related to HTTPS?
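
    A hedged troubleshooting sketch of my own, not from the original question: surfacing PHP/MediaWiki errors temporarily usually turns the silent refresh into a readable message, and PHP's own upload limits can reject a file without MediaWiki reporting anything:

      ; php.ini limits that commonly block uploads silently
      upload_max_filesize = 20M
      post_max_size = 20M

      // Temporary debugging lines near the top of LocalSettings.php
      error_reporting( E_ALL );
      ini_set( 'display_errors', 1 );
      $wgShowExceptionDetails = true;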

    Read the article

  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

    Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

    Question: Are there any clients for Microsoft Exchange that run on iOS or Android and store their local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...

    Read the article

  • How to connect a VM running on an ESXi host to that host via a VMKernel NIC?

    - by Zac B
    Say I have an ESXi (5.0) host that runs a Linux distribution which hosts iSCSI targets, which in turn contain the images for other VMs which the host will run. When it's used, I'll start the host first, then the iSCSI server, and then refresh all storage targets/HBAs in order to see the provided shares as online. I know it's a strange puzzle-box solution, but I was told to implement it. The ESXi host itself has a gigabit NIC which connects to the outside world. The guest OS (CentOS) supports VMXNET3, however, and if I can, I'd like to use its VMXNET3 NIC to host iSCSI for the ESXi host. How should I go about doing this? I went to create a new virtual network and selected "VMkernel", as it suggested that I use that type of network for SAN traffic, but it is apparently not set up for "self-hosted" SAN hosts, as the new network did not appear as an option to attach the CentOS box's VMXNET3 NIC to. How should I best connect an iSCSI host out to its "parent" ESXi host, if I need a) a 10Gb connection, and (optionally) b) a VMkernel network for it?

    Read the article

  • save workspace environment in windows?

    - by zac
    Is there a way in Windows to save the layout of the workspace, or is there other software that will allow me to have multiple programs start up and open into windows at fixed resolutions/positions on my displays? I envision something like how Photoshop works, where you can save the position and type of menus that are open as different workspaces.

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc nor from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.
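
    One hedged way to gather more evidence, offered as my own suggestion: the one-line "Verified" status only reflects the drive's overall self-assessment, so pulling the full SMART attribute table and running a self-test from the Ubuntu boot CD may show reallocated or pending sectors that point at the drive rather than the controller. The device name /dev/sda is an assumption:

      sudo apt-get install smartmontools
      sudo smartctl -a /dev/sda              # full attributes: check Reallocated_Sector_Ct,
                                             # Current_Pending_Sector and the error log
      sudo smartctl -t long /dev/sda         # start an extended self-test
      sudo smartctl -l selftest /dev/sda     # read the result once the test finishes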

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: We have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work, most of the time. Most of the computers are laptops. They are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because so many of our business operations are done without an internet connection, and because of how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get files back that they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this, thousands (USD) on a server and hundreds on clients, if necessary.
I want to emphasize that this isn't a shopping list request. While I wish there was a program out there that did what I want, I've looked pretty hard, and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • CentOS disable filesystem check: superblock last mount time is in the future

    - by Zac B
    I'm persistently getting the "superblock last mount time is in the future" error when booting CentOS 6. I've seen other questions which ask how to resolve this error, but I know exactly why it's occurring: our development/testing VMs regularly have their dates set to times far from the present, and have all of their filesystems remounted. What I want to know is: how do I disable all consistency checking of the superblock mount time in CentOS? I've tried tune2fs -i 0 <device> and setting buggy_init_scripts=1 in /etc/e2fsck.conf, and neither has worked; the problem persists.
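
    For what it's worth, a hedged sketch of the setting this usually comes down to (worth verifying against e2fsck.conf(5) on the target release): broken_system_clock in the [options] section tells e2fsck to treat the clock as unreliable and skip time-based checks such as "last mount time is in the future":

      # /etc/e2fsck.conf
      [options]
          broken_system_clock = 1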

    Read the article

  • Understanding the Linux Root

    - by Zac
    I've been using Linux (Ubuntu) for about 2 weeks now and am still struggling with some basic concepts surrounding the root user:

    (1) Some terminal operations (such as making subdirectories inside an FHS directory such as /opt) require me to prefix the command with sudo - why? I guess what I'm choking on is: if I'm already logged in as a valid system user, why do I have to be a superuser/root in order to modify things that the sysadmin has already deemed me worthy of accessing?

    (2) Is there a GUI (GNOME, KDE) equivalent to sudo? Is there a way to assume a superuser role through a graphical context, rather than from inside a new shell?

    (3) I can't access the /root directory logged in as myself... but I installed the system to begin with and was never asked to create a root account! How do I log in as root and gain access to /root?!?

    Thanks for all feedback & input!
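
    A brief hedged illustration for points (2) and (3), assuming a GNOME-era Ubuntu desktop with the gksu/kdesudo packages available: Ubuntu ships with the root account locked and relies on sudo instead, so a root shell or a root-privileged file manager is obtained through sudo rather than by logging in as root:

      sudo -i               # interactive root shell (enter your own password)
      ls /root              # readable from that shell

      gksudo nautilus       # GNOME: run the file manager with root rights
      kdesudo dolphin       # KDE equivalent

      sudo passwd root      # only if you really want a root password for direct
                            # root logins (generally discouraged on Ubuntu)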

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged"? I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files.

    What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!

    Read the article

  • Is there a way to determine the original size or file count of a 7-zip archive?

    - by Zac B
    I know that when I compress an archive with the 7za utility, it gives me stats like the number of files processed and the amount of bytes processed (the original size of the data). Is it possible, using the commandline (on linux) or some programming language, to determine: the original size of an archive, before it was compressed? the number of files/directories contained within an archive? The answer might be "no, just decompress the whole archive and do counting/sizing then", but it would be useful to know if there was a faster/less space-greedy way.
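
    For what it's worth, a hedged sketch (the archive name is a placeholder): 7-Zip's list command reads only the archive metadata, so it reports per-file and total uncompressed sizes plus a files/folders count in its summary line, without extracting anything:

      7za l backup.7z          # listing ends with the total uncompressed size and
                               # a "N files, M folders" count
      7za l -slt backup.7z     # per-file technical details (sizes, attributes, CRCs)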

    Read the article

  • GUID Partition Table & Linux

    - by Zac
    (1) Is it true that the new GUID Partition Table (GPT) scheme allows a user to partition a drive however he/she likes, outside of the traditional MBR "4 primaries, or 3 primaries + 1 extended" paradigm? If so, are there any limitations to the GPT? If my assumption is wrong, what are its advantages over the MBR model?

    (2) I'm getting a new laptop this week and will be installing Ubuntu (and, more generally, Linux) for the first time ever. Does Ubuntu come pre-configured with MBR as a default? If so, how do I get Ubuntu with GPT? If not, how do I specify GPT over MBR? Thanks!
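
    A hedged sketch for question (2), with the device name as an assumption: the installer defaults to an MBR (msdos) label on BIOS machines, but you can pre-label the disk as GPT from a live session and then use the installer's manual partitioning on it. On a BIOS (non-UEFI) machine, GRUB also needs a tiny bios_grub partition:

      sudo parted /dev/sda print            # show the current partition table type
      sudo parted /dev/sda mklabel gpt      # WARNING: wipes the existing partition table
      sudo parted /dev/sda mkpart primary 1MiB 3MiB
      sudo parted /dev/sda set 1 bios_grub on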

    Read the article

1 2 3 4  | Next Page >