Search Results

Search found 3459 results on 139 pages for 'if modified since'.


  • Smart auto-completion for staged git file names, used with difftool

    - by piobyz
    I'd like to have smart auto-completion of the currently staged file names when using git diff. Example:

        modified: DIR1/LongCamelCaseFileName.h
        modified: DIR1/AnotherLongCamelCaseFileName.m
        modified: DIR1/AndThereAreALotOfThemInDir1.m
        modified: DIR2/file4.m

    Here, using bash's tab-completion, I'd like to use it with git diff, where by "smart" I mean that after typing git diff I'd only need to type a short part of the staged file name that I want to diff, and without the dirname. So, for example, git diff And<TAB> would expand to git diff AndThereAreALotOfThemInDir1.m. Actually, even without the dir-omitting part it would still be useful (completing against the pool of staged files only).
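
    A minimal bash sketch of the idea (the gdiff wrapper and the function name are made up for illustration; it completes against the paths reported by git diff --cached --name-only, matching on the basename; paths containing spaces would need extra care):

        # offer every staged path whose basename starts with the typed text
        _staged_files() {
            local cur=${COMP_WORDS[COMP_CWORD]} p
            COMPREPLY=()
            for p in $(git diff --cached --name-only); do
                [[ $(basename "$p") == "$cur"* ]] && COMPREPLY+=("$p")
            done
        }
        gdiff() { git diff "$@"; }          # wrapper, so git's own completion stays untouched
        complete -F _staged_files gdiff

    With the staged files above, typing gdiff And<TAB> completes to gdiff DIR1/AndThereAreALotOfThemInDir1.m, which is the full path git needs anyway.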

    Read the article

  • Copying 2D lists in Python

    - by SuperString
    Hi, I want to copy a 2D list so that if I modify one list, the other is not modified. For a 1D list, I just do this:

        a = [1, 2]
        b = a[:]

    Now if I modify b, a is not modified. But this doesn't work for a 2D list:

        a = [[1, 2], [3, 4]]
        b = a[:]

    If I modify b, a gets modified as well. How do I fix this?
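
    The slice copies only the outer list; both copies still point at the same inner lists. A short sketch of the usual fixes from the standard library:

        import copy

        a = [[1, 2], [3, 4]]
        b = copy.deepcopy(a)        # recursively copies the nested lists too
        b[0][0] = 99
        print(a)                    # [[1, 2], [3, 4]] -- unchanged

        c = [row[:] for row in a]   # also fine when nesting is exactly one level deep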

    Read the article

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch:

        git fetch

    then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

        git pull

    and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   app/controllers/chat_controller.rb
        #       modified:   app/views/dashboard/index.html.erb
        #       modified:   app/views/dashboard/layout.js.erb
        #       modified:   app/views/layouts/dashboard.html.erb
        #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #       modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       .loadpath
        #       .project
        #       config/database.yml
        #       config/environments/development.yml
        #       config/environments/production.yml
        #       config/environments/test.yml
        #       log/
        no changes added to commit (use "git add" and/or "git commit -a")
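
    A quick way to check whether the fetch actually brought the new commits down, before blaming the checkout (a sketch; assumes the remote is named origin):

        git fetch origin
        git log --oneline -5 origin/chat     # do the pushed commits show up here?
        git log --oneline chat..origin/chat  # commits on origin/chat not yet in local chat

    If the first log lacks the commits, the push went somewhere unexpected; if the second shows them, the local branch simply hasn't merged them yet.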

    Read the article

  • Select top/latest 10 in CouchDB?

    - by drozzy
    How would I execute a query equivalent to "select top 10" in CouchDB? For example, I have a "schema" like so:

        title
        body
        modified

    and I want to select the last 10 modified documents. As an added bonus, it would be great if anyone can come up with a way to do the same per category. So for:

        title
        category
        body
        modified

    return a list of the latest 10 documents in each category. I am just wondering if such a query is possible in CouchDB.
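
    A sketch of the usual view-based approach (the design document and view names here are made up): emit the modified date as the view key, then read the view backwards with a limit.

        // map function: key each document by its modified date
        function (doc) {
          if (doc.modified) emit(doc.modified, null);
        }

    Querying GET /db/_design/app/_view/by_modified?descending=true&limit=10&include_docs=true then returns the latest 10. For the per-category bonus, emit a compound key [doc.category, doc.modified] instead and query once per category with descending=true, limit=10, startkey=["somecat", {}] and endkey=["somecat"] (with descending order, startkey is the high end of the range).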

    Read the article

  • Encounter error "Linux ip -6 addr add failed" while setting up OpenVPN client

    - by Mickel
    I am trying to set up my router to use OpenVPN and have gotten quite far (I think), but something seems to be missing and I am not sure what. Here is my configuration for the client:

        client
        dev tun
        proto udp
        remote ovpn.azirevpn.net 1194
        remote-random
        resolv-retry infinite
        auth-user-pass /tmp/password.txt
        nobind
        persist-key
        persist-tun
        ca /tmp/AzireVPN.ca.crt
        remote-cert-tls server
        reneg-sec 0
        verb 3

    OpenVPN client log:

        Nov 8 15:45:13 rc_service: httpd 15776:notify_rc start_vpnclient1
        Nov 8 15:45:14 openvpn[27196]: OpenVPN 2.3.2 arm-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Nov 1 2013
        Nov 8 15:45:14 openvpn[27196]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
        Nov 8 15:45:14 openvpn[27196]: Socket Buffers: R=[116736->131072] S=[116736->131072]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link local: [undef]
        Nov 8 15:45:14 openvpn[27202]: UDPv4 link remote: [AF_INET]178.132.75.14:1194
        Nov 8 15:45:14 openvpn[27202]: TLS: Initial packet from [AF_INET]178.132.75.14:1194, sid=44d80db5 8b36adf9
        Nov 8 15:45:14 openvpn[27202]: WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=1, C=RU, ST=Moscow, L=Moscow, O=Azire Networks, OU=VPN, CN=Azire Networks, name=Azire Networks, [email protected]
        Nov 8 15:45:14 openvpn[27202]: Validating certificate key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has key usage 00a0, expects 00a0
        Nov 8 15:45:14 openvpn[27202]: VERIFY KU OK
        Nov 8 15:45:14 openvpn[27202]: Validating certificate extended key usage
        Nov 8 15:45:14 openvpn[27202]: ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
        Nov 8 15:45:14 openvpn[27202]: VERIFY EKU OK
        Nov 8 15:45:14 openvpn[27202]: VERIFY OK: depth=0, C=RU, ST=Moscow, L=Moscow, O=AzireVPN, OU=VPN, CN=ovpn, name=ovpn, [email protected]
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Nov 8 15:45:15 openvpn[27202]: Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Nov 8 15:45:15 openvpn[27202]: Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA
        Nov 8 15:45:15 openvpn[27202]: [ovpn] Peer Connection Initiated with [AF_INET]178.132.75.14:1194
        Nov 8 15:45:17 openvpn[27202]: SENT CONTROL [ovpn]: 'PUSH_REQUEST' (status=1)
        Nov 8 15:45:17 openvpn[27202]: PUSH: Received control message: 'PUSH_REPLY,ifconfig-ipv6 2a03:8600:1001:4010::101f/64 2a03:8600:1001:4010::1,route-ipv6 2000::/3 2A03:8600:1001:4010::1,redirect-gateway def1 bypass-dhcp,dhcp-option DNS 194.1.247.30,tun-ipv6,route-gateway 178.132.77.1,topology subnet,ping 3,ping-restart 15,ifconfig 178.132.77.33 255.255.255.192'
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: timers and/or timeouts modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ifconfig/up options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: route-related options modified
        Nov 8 15:45:17 openvpn[27202]: OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP device tun0 opened
        Nov 8 15:45:17 openvpn[27202]: TUN/TAP TX queue length set to 100
        Nov 8 15:45:17 openvpn[27202]: do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
        Nov 8 15:45:17 openvpn[27202]: /usr/sbin/ip link set dev tun0 up mtu 1500
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip addr add dev tun0 178.132.77.33/26 broadcast 178.132.77.63
        Nov 8 15:45:18 openvpn[27202]: /usr/sbin/ip -6 addr add 2a03:8600:1001:4010::101f/64 dev tun0
        Nov 8 15:45:18 openvpn[27202]: Linux ip -6 addr add failed: external program exited with error status: 254
        Nov 8 15:45:18 openvpn[27202]: Exiting due to fatal error

    Any ideas are most welcome!

    Read the article

  • Open source, noncommercial license?

    - by nick-russler
    Hey, I want to publish my software under an open source license with the following conditions.

    You are allowed to:

        Share -- copy, distribute and transmit the work
        Use a modified version of the code in your application

    You are not allowed to:

        Publish modified versions of the code
        Use the code in anything commercial

    Is there a software license out there that fits my needs? (Crosspost: http://stackoverflow.com/questions/4558546/opensource-noncommercial-license)

    Read the article

  • Mail queue directory stuck in IIS SMTP server

    - by Loftx
    Hi there, we have an IIS SMTP server which sends out a largish number of mails (4,000 or so) in batches overnight, and recently we've seen mails get "stuck" in the queue directory. Normally restarting the SMTP service seems to fix this, but it's happened a few times, so I'm looking for more information. We sent out around 12,000 emails last night in 3 batches of roughly 4,000. Around 10 hours later there are still 2,000 or so in the queue directory which don't seem to be leaving the queue. Any new mails which appear in the queue are picked up almost immediately and sent to their destination, but these 2,000 or so don't seem to move. Looking at the date modified on the emails, some match up with the time they were sent, but around 1,000 of them have modified dates stretching up to now. For example, there was one mail with a date in the message headers of 5:30 this morning, but its date modified is 11:50, and there are 3 other messages with a date modified of 11:50, then 5 with 11:49, 2 with 11:45, stretching back for a few hours, all with actual message headers far earlier. The logs for the server look like this:

        11:54:52 127.0.0.1 EHLO - 250
        11:54:52 127.0.0.1 MAIL - 250
        11:54:52 127.0.0.1 RCPT - 250
        11:54:52 127.0.0.1 DATA - 250
        11:54:52 127.0.0.1 QUIT - 240
        11:54:53 85.115.62.190 - - 0
        11:54:53 85.115.62.190 EHLO - 0
        11:54:53 85.115.62.190 - - 0
        11:54:53 85.115.62.190 MAIL - 0
        11:54:53 85.115.62.190 - - 0
        11:54:53 85.115.62.190 RCPT - 0
        11:54:53 85.115.62.190 - - 0
        11:54:53 85.115.62.190 DATA - 0
        11:54:53 85.115.62.190 - - 0
        11:54:54 85.115.62.190 - - 0
        11:54:54 85.115.62.190 QUIT - 0
        11:54:54 85.115.62.190 - - 0

    All codes are either 250, 240 or 0. I believe 250 and 240 indicate success, but I don't know what all the 0s are. Could someone with more experience of mail server troubleshooting give me a hand or tell me what to try next? Thanks, Tom

    Read the article

  • Tripwire help required

    - by ramaperumal
    I have created the policy file in Tripwire and also created the rules, as mentioned below:

        /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
        /usr/local/gtech/eseries/ -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M ;

    After running the integrity check, the output should cover a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file") and M (MD5 hash value). I am getting the output below:

        [root@xxsi1242 tripwire]# tripwire --check
        Parsing policy file: /etc/tripwire/tw.pol
        *** Processing Unix File System ***
        Performing integrity check...
        Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

        Open Source Tripwire(R) 2.4.1 Integrity Check Report

        Report generated by:          root
        Report created on:            Wed 06 Nov 2013 05:38:12 AM EST
        Database last updated on:     Wed 06 Nov 2013 05:31:17 AM EST

        ===============================================================================
        Report Summary:
        ===============================================================================
        Host name:                    xxsi1242.gtk.gtech.com
        Host IP address:              156.24.65.171
        Host ID:                      None
        Policy file used:             /etc/tripwire/tw.pol
        Configuration file used:      /etc/tripwire/tw.cfg
        Database file used:           /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
        Command line used:            tripwire --check

        ===============================================================================
        Rule Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name                       Severity Level   Added   Removed   Modified
        ---------                       --------------   -----   -------   --------
        Invariant Directories           66               0       0         0
        Temporary directories           33               0       0         0
        * Tripwire Data Files           100              0       0         1
        Tech Stack                      100              0       0         0
        User binaries                   66               0       0         0
        Tripwire Binaries               100              0       0         0
        * CLPS bins                     100              0       0         2
        CLPS Configuration files        100              0       0         0
        ESCommon                        100              0       0         0
        Shell Binaries                  100              0       0         0
        OS executables and libraries    100              0       0         0
        Security Control                100              0       0         0
        ESCommon Configuration          100              0       0         0
        (/etc/gtech/escommon)

        Total objects scanned:  12358
        Total violations found: 3

        ===============================================================================
        Object Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
        Severity Level: 100
        Modified: "/etc/tripwire/tw.pol"

        Rule Name: CLPS bins (/opt/jboss/server)
        Severity Level: 100
        Modified:
        "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
        "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

        ===============================================================================
        Error Report:
        ===============================================================================
        No Errors

        *** End of report ***

    Note: in the output I am only getting the files which were modified. I need the detailed output for each violation, but unfortunately I am not getting what I expected. Please help me to proceed further.
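
    If the saved report contains more detail than the console summary shows, printing it at a higher report level may produce the per-property breakdown (a sketch; the -t report-level flag is Open Source Tripwire's, but verify the exact option against your twprint man page):

        twprint -m r -t 4 \
            --twrfile /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr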

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

        password length
        repeated characters
        patterns (logical)
        different character spaces (LC, UC, numeric, special, extended)
        dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):

        ordering (passwords can be strictly ordered by output of this algorithm)
        patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm:

        // the password to test
        password = ?
        length = length(password)

        // unique character counts from password (duplicates discarded)
        uqlca = number of unique lowercase alphabetic characters in password
        uquca = number of unique uppercase alphabetic characters
        uqd   = number of unique digits
        uqsp  = number of unique special characters (anything with a key on the keyboard)
        uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

        // algorithm parameters, total sizes of alphabet spaces
        Nlca = total possible number of lowercase letters (26)
        Nuca = total uppercase letters (26)
        Nd   = total digits (10)
        Nsp  = total special characters (32 or something)
        Nxc  = total extended ascii characters that dont fit into other categorys (idk, 50?)

        // algorithm parameters, pw strength growth rates as percentages (per character)
        flca = entropy growth factor for lowercase letters (.25 is probably a good value)
        fuca = EGF for uppercase letters (.4 is probably good)
        fd   = EGF for digits (.4 is probably good)
        fsp  = EGF for special chars (.5 is probably good)
        fxc  = EGF for extended ascii chars (.75 is probably good)

        // repetition factors. few unique letters == low factor, many unique == high
        rflca = (1 - (1 - flca) ^ uqlca)
        rfuca = (1 - (1 - fuca) ^ uquca)
        rfd   = (1 - (1 - fd  ) ^ uqd  )
        rfsp  = (1 - (1 - fsp ) ^ uqsp )
        rfxc  = (1 - (1 - fxc ) ^ uqxc )

        // digit strengths
        strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length
        entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

        INPUT             DESIRED          ACTUAL
        aaa               very pathetic      8.1
        aaaaaaaaa         pathetic          24.7
        abcdefghi         weak              31.2
        H0ley$Mol3y_      strong            72.2
        s^fU¬5ü;y34G<     wtf               88.9
        [a^36]*           pathetic          97.2
        [a^20]A[a^15]*    strong           146.8
        xkcd1**           medium            79.3
        xkcd2**           wtf              160.5

        *  these 2 passwords use shortened notation, where [a^N] expands to N a's.
        ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, it does not account for the fact that a password of 36 a's is a bad idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?

    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit. I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represented. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code. Modified example:

        Password:          1234kitty$$$$$herpderp
        Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
        Words Filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
        Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

        Breakdown: 3 small, unique words and 2 patterns
        Entropy:   about 45 bits, as per modified algorithm

        Password:       correcthorsebatterystaple
        Tokenized:      c o r r e c t h o r s e b a t t e r y s t a p l e
        Words Filtered: @W6783 @W7923 @W1535 @W2285

        Breakdown: 4 small, unique words and no patterns
        Entropy:   43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

        entropy(b) * l * (o + 1)  // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
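
    A runnable Python sketch of the base scoring formula exactly as written above (the growth factors and alphabet sizes are the question's suggested guesses, not vetted constants):

        import math
        import string

        # (character class, alphabet size N, entropy growth factor f)
        SPACES = [
            (set(string.ascii_lowercase), 26, 0.25),
            (set(string.ascii_uppercase), 26, 0.40),
            (set(string.digits),          10, 0.40),
            (set(string.punctuation),     32, 0.50),
        ]
        XC_SIZE, XC_FACTOR = 50, 0.75   # "extended" characters: everything else

        def entropy_bits(password):
            chars = set(password)
            strength = 0.0
            for space, n, f in SPACES:
                uq = len(chars & space)              # unique chars of this class
                strength += (1 - (1 - f) ** uq) * n  # repetition factor * space size
                chars -= space
            strength += (1 - (1 - XC_FACTOR) ** len(chars)) * XC_SIZE
            # entropybits = log2(strength ^ length) = length * log2(strength)
            return len(password) * math.log2(strength) if strength > 1 else 0.0

        print(round(entropy_bits("aaa"), 1))   # 8.1 with these parameters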

    Read the article

  • PHP file downloads instead of being processed with AJAX on Apache

    - by eagleon
    I have a small website where some content is displayed within an HTML tag using AJAX. The content is simply taken from another page on the same web site. However, sometimes, instead of loading the parsed PHP file, the browser displays a download box instead. I downloaded the file, and this is what it looks like: a text file mixed with binary or gzipped data. I can't paste the binary stuff here, but here are some of the headers:

        Jul 2012 18:52:16 GMT
        Server: Apache/2
        X-Powered-By: PHP/5.3.10
        Content-Encoding: gzip
        Vary: Accept-Encoding,User-Agent
        Keep-Alive: timeout=1, max=95
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=93
        ETag: "2fc857-409-4c39691c59b40"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=92
        ETag: "2fc854-3e5-4c39691b65900"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=91
        ETag: "2fc847-3e3-4c3969197d480"

    and large blocks of stuff like this: µàl]&BaËÜk#ìÏ
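
    One way to see exactly what the server sends for that URL, independent of the browser (a sketch; substitute the real endpoint for the example URL):

        curl -s -D - -o /tmp/body -H 'Accept-Encoding: gzip' \
            'http://example.com/ajax-endpoint.php'
        file /tmp/body    # "gzip compressed data" vs. "HTML document" tells you
                          # whether the body and the Content-Encoding header agree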

    Read the article

  • nginx status code 200 and 304

    - by Chamnap
    I'm using nginx + Passenger. I'm trying to understand the nginx responses 200 and 304. What do they both mean? Sometimes it responds with 304, and other times only 200. Reading the YUI blog, it seems the browser needs the "Last-Modified" header to verify with the server. I'm wondering why the browser needs to verify the last modified date. Here is my nginx configuration:

        location / {
            root /var/www/placexpert/public;   # <--- be sure to point to 'public'!
            passenger_enabled on;
            rack_env development;
            passenger_use_global_queue on;

            if ($request_filename ~* ^.+\.(jpg|jpeg|gif|png|ico|css|js|swf)$) {
                expires max;
                break;
            }
        }

    How would I add the header "Last-Modified" to the static files? Which value should I set?
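
    For context: 200 means the server sent the full body; 304 is the answer to a conditional request, where the browser sends back the validator it cached (If-Modified-Since with the Last-Modified date, or If-None-Match with the ETag) and the server replies "not modified" with no body. When nginx serves a file from disk itself, it sets Last-Modified from the file's modification time automatically. The exchange can be replayed by hand (a sketch; the URL and date are examples):

        curl -s -D - -o /dev/null http://example.com/logo.png
        # copy the Last-Modified value from the output, then send it back:
        curl -s -D - -o /dev/null \
            -H 'If-Modified-Since: Thu, 18 Mar 2010 21:40:54 GMT' \
            http://example.com/logo.png     # 304 if the file is unchanged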

    Read the article

  • make "sort by" permanent for specific folder

    - by Black Cobra
    I have a folder where I store all my web icons in .png and .ico format. I set the view to large icons and the sorting to date modified -> descending. When I save new icons to this folder from the web, I drag and drop them from the browser (Chrome) into the folder. The problem is that, after that, the icons in the folder are no longer sorted by date modified descending... So how can I set up a file or program or something that will reset the sorting to date modified -> descending for that folder only, each time I access (open) the folder? I am using Windows 7 x64.

    Read the article

  • Only show changed files while syncing from ext4 to NTFS

    - by qox
    I would like rsync to print modified and deleted files. The verbose option (-v) does print modified files, but also the list of subdirectories, maybe because touched directories are considered modified. Since I sync a lot of files from a lot of subdirectories, it's impossible to see the actual changes. So, is there a way to not print directories using rsync? I'm not looking for grep -v "*/$" kinds of answers, since that would also exclude new directories. The command I am using:

        rsync -avh --delete /media/data/src /media/data/bkp

    And every time it prints the list of all directories:

        src/dir1/
        src/dir1/sdir1/
        src/dir1/sdir2/
        src/dir2/
        .....

    Thanks for your help. EDIT: OK, after some intensive tests... It doesn't print all directories when syncing from an ext4 partition to ext4, or from NTFS to NTFS. It only does so when syncing from ext4 to NTFS... And the options '-c' and '--omit-dir-times' don't change that.
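
    A sketch of one workaround using rsync's itemized output instead of -v: with -i, a directory that changed only in metadata itemizes with a leading "." (meaning nothing was transferred), while a genuinely new directory itemizes as "cd+++++++++", so filtering on that prefix keeps new directories visible:

        rsync -ahi --delete /media/data/src /media/data/bkp | grep -v '^\.d'

    This drops lines like ".d..t...... src/dir1/" (directory, timestamp-only change) but keeps "cd+++++++++ src/newdir/" and all file lines.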

    Read the article

  • Boot.ini on Windows Server 2003 R2

    - by Jason H.
    I have a Windows Server 2003 R2 box with 48 GB of RAM; the server has been running strong for quite some time. Recently our boot.ini was modified, causing issues, most likely by our remote administrators. Now the server is only showing 14 GB of RAM. This has caused major performance issues for our end users. Our remote administrators have stated "we don't change the boot.ini settings (switches)". However, I know for a fact that all of the local administrators have not modified the switches (due to lack of permissions). The real question: is it possible to "audit" who has modified the boot.ini? If that's not possible, can the boot.ini be reset via a startup script? Any help would be greatly appreciated. This is an ongoing issue that I would love to resolve.
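
    For the fallback, a minimal batch sketch that restores a known-good copy at startup (the C:\admin path is made up; boot.ini normally carries the read-only/system/hidden attributes, so they have to be dropped and restored around the copy):

        attrib -r -s -h C:\boot.ini
        copy /Y C:\admin\boot.ini.known-good C:\boot.ini
        attrib +r +s +h C:\boot.ini

    For the audit question, enabling "Audit object access" in policy and adding a write/delete auditing entry (SACL) on boot.ini itself would log future modifications to the Security event log, though it cannot reveal who made the past change.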

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server runs MySQL, and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online, web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: by sending a date to an API endpoint corresponding to the last date the app was updated, the response would be JSON filled with all new objects, updated objects, and deleted IDs, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "deleted" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed in the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
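
    That is essentially the standard "tombstone" pattern. A minimal MySQL sketch (table and column names are illustrative):

        -- track when each row changed, and mark deletions instead of removing rows
        ALTER TABLE items
            ADD COLUMN last_modified DATETIME NOT NULL,
            ADD COLUMN deleted TINYINT(1) NOT NULL DEFAULT 0;

        -- everything the client is missing since its last successful sync
        SELECT id, title, body, deleted, last_modified
        FROM items
        WHERE last_modified > '2013-06-01 12:00:00';

    The client applies updates, removes rows flagged deleted = 1, and stores the server's timestamp for the next call. Tombstone rows only need to be kept as long as the oldest client version you intend to support.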

    Read the article

  • How to force browsers to always reload xslt files?

    - by bitmask
    Related: Apache: How can I force the browser to reload CSS files? I'm building an XML page (on an Apache 2) that is supposed to be translated to XHTML by the browser, so my server also serves a main.xslt which is used as the stylesheet by the XML file, similar to the scenario with the CSS files in the linked question. However, none of the tricks provided in either that answer or some issues on SO solve the problem for Opera. While Firefox responds to F5 by fetching not only the XML file but also the XSLT file, Opera only reloads the XML file. I tried both setting the Last-Modified HTTP header via an .htaccess file and using the expires module of Apache 2. This is what my .htaccess looks like right now:

        AddType text/xsl;charset=utf-8 .xslt
        ExpiresByType text/xsl "modification plus 1 second"
        Header set Last-Modified "Wed, 08 Jan 2000 23:11:55 GMT"
        #Header set Last-Modified "Wed, 08 Jan 2020 23:11:55 GMT"

    If I open the XSL myself and manually reload it, the XML presentation is updated as well, but this is tedious for development. Note: there is no PHP or any kind of scripting involved. Everything is static.
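
    A sketch of the blunter approach for development: tell every client the stylesheet may never be reused without revalidation (mod_headers syntax, scoped to the XSLT files only):

        <FilesMatch "\.xslt?$">
            Header set Cache-Control "no-cache, must-revalidate"
            Header set Expires "0"
        </FilesMatch>

    Whether Opera honors this for stylesheets referenced via xml-stylesheet processing instructions is the open question; the directive itself only guarantees that a conforming cache revalidates on every use.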

    Read the article

  • Is my concept of open source licenses correct?

    - by tester
    I would like to verify whether my concept of open source licenses is correct since, as you know, misunderstanding the terms may lead to a serious lawsuit. Thank you. The main difference among open source licenses is whether the license is copyleft. A copyleft license allows others to reproduce, modify and distribute the product, but the released product is bound by the same licensing restrictions. That means they have to use the same license for the modified version. Also, copyleft licenses require all released modified versions to be free software. On the other hand, if anyone creates a derived work incorporating non-copyleft licensed code, they can choose any license for the code. Several kinds of licenses, and a comparison:

        GPL is a restrictive license. Software is required to be released under the GPL license if it integrates with, or is modified from, other GPL-licensed software. The libraries used in developing GPL-licensed software are also restricted to the GPL and LGPL; proprietary software is not allowed to be employed in (or compiled with) any part of a GPL application.
        LGPL is similar to the GPL, but is more permissive regarding the use of other, non-GPL software.
        BSD is a relatively simple license; it allows developers to do anything with the original source code. The license holder does not hold any legal responsibility for the released product.
        The Apache license evolved from the BSD license. The legal terms are improved and written by legal professionals in a more modern way. It covers comprehensive intellectual property ownership and liability issues.

    Also, are there any popular licenses besides these? Thank you.

    Read the article

  • Google Chrome audit on caching

    - by Álvaro G. Vicario
    If I run an audit on my sites with Google Chrome, I get this message in the "Leverage browser caching" section:

        The following resources are missing a cache expiration. Resources that do not
        specify an expiration may not be cached by browsers:

    A list of all the pictures follows. I get a similar notice in "Leverage proxy caching":

        Consider adding a "Cache-Control: public" header to the following resources:

    Apart from pictures, I also get a notice about HTML, CSS and JavaScript files:

        The following resources are explicitly non-cacheable. Consider making them
        cacheable if possible:

    It's funny because I've worked hard to cache all static content (except for pictures, where I just left Apache's default settings). Firefox does indeed store all these items in its cache. Is there anything I should improve in my HTTP headers? Here's the complete header set of some items, as loaded after clearing the browser cache. Pictures use default settings I didn't really check before; the rest should be cached for three hours. I can set headers with both .htaccess and PHP.

    PNG:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:14 GMT
        Server: Apache
        Last-Modified: Thu, 18 Mar 2010 21:40:54 GMT
        Etag: "c48024-230-4821a15d6c580"
        Accept-Ranges: bytes
        Content-Length: 560
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Content-Type: image/png

    HTML:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:46:13 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:46:13 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Wed, 24 Mar 2010 20:30:36 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-15

    CSS:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/css

    JavaScript:

        HTTP/1.1 200 OK
        Date: Sat, 31 Jul 2010 12:48:21 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.11
        Expires: Sat, 31 Jul 2010 15:48:21 GMT
        Cache-Control: max-age=10800, s-maxage=10800, must-revalidate, proxy-revalidate
        Content-Encoding: gzip
        Vary: Accept-Encoding
        Last-Modified: Thu, 18 Mar 2010 21:40:12 GMT
        Keep-Alive: timeout=4
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: application/x-javascript

    Update: I've tested Jumby's suggestion and set my CSS's expiry to 1 year:

        Cache-Control: max-age=31536000, s-maxage=31536000, must-revalidate, proxy-revalidate
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 4198
        Content-Type: text/css
        Date: Mon, 02 Aug 2010 20:48:56 GMT
        Expires: Tue, 02 Aug 2011 20:48:56 GMT
        Keep-Alive: timeout=5, max=99
        Last-Modified: Thu, 18 Mar 2010 20:40:12 GMT
        Server: Apache/2.2.14 (Win32) PHP/5.3.1
        Vary: Accept-Encoding
        X-Powered-By: PHP/5.3.1

    However, Chrome still claims "explicitly non-cacheable".
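
    For the pictures, which are the resources genuinely missing an expiration, a hedged .htaccess sketch with mod_expires and mod_headers (the lifetimes are arbitrary examples, not recommendations):

        ExpiresActive On
        ExpiresByType image/png  "access plus 1 month"
        ExpiresByType image/jpeg "access plus 1 month"
        ExpiresByType image/gif  "access plus 1 month"
        <FilesMatch "\.(png|jpe?g|gif|ico)$">
            Header set Cache-Control "public"
        </FilesMatch>

    That addresses the first two audit notices; the "explicitly non-cacheable" complaint about the HTML/CSS/JS is plausibly triggered by the must-revalidate/proxy-revalidate pair, so dropping those two tokens is worth a test.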

    Read the article

  • SQL Server Reporting Services Data Extension

    - by Vercinegetorix
    Hello! So... here's my story: I'm trying to create a SQL Server data extension (to be precise, I'm trying to get some sample code to run) (SSRS 2005). I've done the following:

        Placed the extension assembly into the ReportServer/bin folder.
        Placed the assembly into the Private Assemblies folder.
        Modified rsreportserver.config and added the assembly info to the data section.
        Modified rssrvpolicy.config and added a code group for the assembly with Full Trust.
        Modified RSReportDesigner.config in PrivateAssemblies; added the assembly to the data and designer sections, specifying the generic query designer.
        Modified RSPreviewPolicy.config and added the assembly with Full Trust.

    The new data source type is available for selection, but when I try to view the dataset I get this error:

        The data extension DataSet could not be loaded. Check the configuration file
        RSReportDesigner.config.

    The location of the assembly is configured properly (I think), because I've added logging code and I can see that the constructor of the Connection object is being called. In fact, I've added logging code to every method of every class in the assembly, and as far as I can tell the failure occurs right after the connection object's constructor is called. Any ideas on how I might proceed to debug this? Thanks a lot!
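
    For reference, the data section entry usually takes this shape (a sketch; MyNamespace/MyExtensionAssembly are placeholders, and the Name must match what the report uses):

        <Data>
            <Extension Name="DataSet"
                       Type="MyNamespace.MyConnection, MyExtensionAssembly" />
        </Data>

    A Name that differs between RSReportDesigner.config and rsreportserver.config, or a Type string that doesn't exactly match the class and assembly names, is a common cause of this kind of "could not be loaded" error, so those strings are worth re-checking character by character.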

    Read the article

  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently tackling how to go about syncing, and I've come up with the following two directions. I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such syncing the order does not matter. One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as perform certain actions on it, such as updating, adding or removing single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create/remove/update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though. The other direction I was considering is a more traditional model. I would have a "sync" process in which the client would send its whole list to the server (or a subset modified since the last sync), the server would update its data (fixing conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server would return an updated list of items (since the last successful sync) which the client would then replace its data with. Thoughts?
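
    For the first direction, the delta response might look something like this (a sketch; the field names are illustrative, not a standard):

        {
          "synced_at": "2013-06-01T12:00:00Z",
          "created":  [ { "id": 17, "title": "...", "modified": "2013-06-01T11:58:02Z" } ],
          "updated":  [ { "id": 9,  "title": "...", "modified": "2013-06-01T11:59:40Z" } ],
          "deleted":  [ 3, 12 ]
        }

    The client applies the three lists in order, then stores synced_at (the server's clock, not its own) as the cursor for the next request; using the server's timestamp sidesteps clock skew between devices.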

    Read the article

  • Is there a problem with this MySQL Query?

    - by ThinkingInBits
    http://pastie.org/954073 The echos at the top all display valid data that fit the database schema. There are no connection errors. Any ideas? Thanks in advance! Here is the echoed query:

        INSERT INTO equipment
          (cat_id, name, year, manufacturer, model, price, location, condition,
           stock_num, information, description, created, modified)
        VALUES
          (1, 'r', 1, 'sdf', 'sdf', '2', 'd', 'd', '3', 'asdfasdfdf', 'df',
           '10 May 10', '10 May 10')

    MySQL is giving:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds
        to your MySQL server version for the right syntax to use near 'condition,
        stock_num, information, description, created, modified) VALUES (1, 'r' at line 1

    The table schema:

        Field          Type               Null   Key   Default   Extra
        id             int(11) unsigned   NO     PRI   NULL      auto_increment
        cat_id         int(11) unsigned   NO           NULL
        prod_name      varchar(255)       YES          NULL
        prod_year      varchar(10)        YES          NULL
        manufacturer   varchar(255)       YES          NULL
        model          varchar(255)       YES          NULL
        price          varchar(10)        YES          NULL
        location       varchar(255)       YES          NULL
        condition      varchar(25)        YES          NULL
        stock_num      varchar(128)       YES          NULL
        information    text               YES          NULL
        description    text               YES          NULL
        created        varchar(20)        YES          NULL
        modified       varchar(20)        YES          NULL

    Query:

        INSERT INTO equipment
          (cat_id, prod_name, prod_year, manufacturer, model, price, location,
           condition, stock_num, information, description, created, modified)
        VALUES
          (1, 'asdf', '234', 'adf', 'asdf', '34', 'asdf', 'asdf', '234', 'asdf',
           'asdf', '10 May 10', '10 May 10')
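
    Worth noting: the error message cuts off exactly at condition, and CONDITION is a reserved word in MySQL 5.0+, so the parser chokes on it as a bare column name. A sketch of the same INSERT with the identifier backquoted (renaming the column would also work):

        INSERT INTO equipment
          (cat_id, prod_name, prod_year, manufacturer, model, price, location,
           `condition`, stock_num, information, description, created, modified)
        VALUES
          (1, 'asdf', '234', 'adf', 'asdf', '34', 'asdf', 'asdf', '234', 'asdf',
           'asdf', '10 May 10', '10 May 10');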

    Read the article

  • Git: Create a branch from unstaged/uncommitted changes on master

    - by knoopx
    Context: I'm working on master, adding a simple feature. After a few minutes I realize it was not so simple and it would have been better to work in a new branch. This always happens to me, and I have no idea how to switch to another branch and take all these uncommitted changes with me, leaving the master branch clean. I supposed git stash && git stash branch new_branch would simply accomplish that, but this is what I get:

        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ echo "hello!" > testing
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git stash
        Saved working directory and index state WIP on master: 4402b8c testing
        HEAD is now at 4402b8c testing
        ~/test $ git status
        # On branch master
        nothing to commit (working directory clean)
        ~/test $ git stash branch new_branch
        Switched to a new branch 'new_branch'
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        Dropped refs/stash@{0} (db1b9a3391a82d86c9fdd26dab095ba9b820e35b)
        ~/test $ git s
        # On branch new_branch
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")
        ~/test $ git checkout master
        M       testing
        Switched to branch 'master'
        ~/test $ git status
        # On branch master
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   testing
        #
        no changes added to commit (use "git add" and/or "git commit -a")

    Do you know if there is any way of accomplishing this?
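
    The stash detour isn't actually needed here: uncommitted changes travel with you when you create and switch to a new branch, and master only "keeps" what is committed on it. A short sketch:

        # with the dirty working tree still on master:
        git checkout -b new_branch   # changes come along; master itself is untouched
        git add -A
        git commit -m "WIP: feature that turned out not to be simple"
        git checkout master          # clean tree now, since the work is committed elsewhere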

    Read the article

  • Updating a dataset using a join and a BindingSource?

    - by netadictos
    Hi, I have created a dataset, and in the designer I have created the relations and foreign keys that exist in the database. Basically, I have a product that has a relationship to a table of prices; the key field they share is IdProduct in the Prices table. In the Fill/Get of the product, I return the Price field. I also have a DataGrid that uses a BindingSource, which uses this table. Everything displays correctly, and when I double-click on a row within the DataGrid, I open a tabbed form that contains a detailed view of the selected record. The user at this point is able to make changes to the record, and they are properly propagated back to the BindingSource. The problem is that the TableAdapter does not contain the appropriate update, therefore I am not able to call the TableAdapter.Update method with the dataset as I would had I created a TableAdapter not using a join. How am I best to handle this situation? At the same time, I cannot get any modified rows:

        dTiendasDs.ProductosDataTable modified =
            (dTiendasDs.ProductosDataTable)dTiendasDs.Productos.GetChanges(DataRowState.Modified);

    modified is always null. Thanks,
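
    One thing worth ruling out (a hedged C# sketch; the bindingSource name is a placeholder): grid edits sit in the binding layer until they are pushed down, and GetChanges returns null when the DataTable itself has no modified rows, so ending the pending edit first sometimes fixes exactly this symptom:

        bindingSource.EndEdit();   // push the pending row edit into the DataTable
        var modified = (dTiendasDs.ProductosDataTable)
            dTiendasDs.Productos.GetChanges(DataRowState.Modified);
        if (modified != null)
        {
            // with a joined select, write the UPDATE yourself or attach a
            // custom UpdateCommand to the adapter before calling Update
        }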

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project where I'm forced to use FTP as the means of deploying files to the live server. I'm developing on Linux, so I hacked together a bash script that makes a backup of the FTP server's contents, deletes all the files on the FTP server, and uploads all the fresh files from the Mercurial repository (taking care of user-uploaded files and folders, making post-deploy changes, etc.). It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed and deploy only the modified files. (The backup is fine as it is for now.) I'm using Mercurial as the VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over them, upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2 and, from the output, carve out the changed files with grep/sed/etc. Two problems: (1) I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here; if I try to parse the output of hg log, the variables will undergo word-splitting and all kinds of transformations. (2) hg log doesn't tell me whether a file was modified, added or deleted, and differentiating between modified and deleted files is the least I'd need. So, what would be the correct way to do this? I'm using yafc as an FTP client, in case it's needed, but I'm willing to switch.
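
    A hedged sketch of the usual answer: hg status can compare two revisions directly and prints one entry per file with a leading M/A/R code, and its -0 option NUL-terminates entries so word-splitting never touches the filenames (the yafc calls are left as placeholder echoes):

        hg status --rev "$REV1" --rev "$REV2" -0 |
        while IFS= read -r -d '' entry; do
            code=${entry:0:1}     # M = modified, A = added, R = removed
            file=${entry:2}
            case $code in
                M|A) echo "upload: $file" ;;   # replace with the yafc put
                R)   echo "delete: $file" ;;   # replace with the yafc rm
            esac
        done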

    Read the article
