Search Results

Search found 1271 results on 51 pages for 'negative lookbehind'.


  • matching certain numbers at the end of a string

    - by user697473
    I have a vector of strings: s <- c('abc1', 'abc2', 'abc3', 'abc11', 'abc12', 'abcde1', 'abcde2', 'abcde3', 'abcde11', 'abcde12', 'nonsense') I would like a regular expression to match only the strings that begin with abc and end with 3, 11, or 12. In other words, the regex has to exclude abc1 but not abc11, abc2 but not abc12, and so on. I thought that this would be easy to do with lookahead assertions, but I haven't found a way. Is there one? EDIT: Thanks to posters below for pointing out a serious ambiguity in the original post. In reality, I have many strings. They all end in digits: some in 0, some in 9, some in the digits in between. I am looking for a regex that will match all strings except those that end with a letter followed by a 1 or a 2. (The regex should also match only those strings that start with abc, but that's an easy problem.) I tried to use negative lookahead assertions to create such a regex. But I didn't have any success.
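
    (No canonical answer appears in this listing, but as a sketch of the lookbehind idea the question hints at: the PCRE-style pattern below keeps strings that start with abc and end in digits unless the trailing digits are a bare 1 or 2 preceded by a letter. Python is used for illustration; the same pattern should also work in R via grepl(pattern, s, perl = TRUE), with the backslashes doubled in the R string literal.)

        import re

        strings = ["abc1", "abc2", "abc3", "abc11", "abc12",
                   "abcde1", "abcde2", "abcde3", "abcde11", "abcde12", "nonsense"]

        # Match "abc" + word characters + a final digit, then use a negative
        # lookbehind to reject strings whose last two characters are a letter
        # followed by a lone 1 or 2 (abc1, abcde2, ...).
        pattern = re.compile(r"^abc\w*\d(?<![A-Za-z][12])$")

        print([s for s in strings if pattern.search(s)])
        # ['abc3', 'abc11', 'abc12', 'abcde3', 'abcde11', 'abcde12']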

    Read the article

  • Practice of checking 'trueness' or 'equality' of conditional statements - does it really make sense?

    - by senthilkumar1033
    I remember many years back, when I was in school, one of my computer science teachers taught us that it was better to check for 'trueness' or 'equality' of a condition and not the negative stuff like 'inequality'. Let me elaborate - If a piece of conditional code can be written by checking whether an expression is true or false, we should check the 'trueness'. Example: Finding out whether a number is odd - it can be done in two ways: if ( num % 2 != 0 ) { // Number is odd } or if ( num % 2 == 1 ) { // Number is odd } When I was beginning to code, I knew that num % 2 == 0 implies the number is even, so I just put a ! there to check if it is odd. But he was like 'Don't check NOT conditions. Have the practice of checking the 'trueness' or 'equality' of conditions whenever possible.' And he recommended that I use the second piece of code. I am not for or against either but I just wanted to know - what difference does it make? Please don't reply 'Technically the output will be the same' - we ALL know that. Is it a general programming practice or is it his own programming practice that he is preaching to others?
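
    (One concrete difference worth noting, illustrated below in Python for convenience: the two checks only disagree on negative numbers, and only in languages whose % operator can return a negative remainder, such as C, C++, Java or C#, where -3 % 2 is -1; there the != 0 form still detects odd numbers while the == 1 form does not. Python's % always yields a non-negative result for a positive divisor, so both forms agree here.)

        def is_odd_not_zero(num):
            return num % 2 != 0   # the "NOT condition" style check

        def is_odd_equal_one(num):
            return num % 2 == 1   # the "trueness/equality" style check

        for num in range(-5, 6):
            # In Python these always agree (-3 % 2 == 1 here); in C-family
            # languages the second check would miss negative odd numbers.
            assert is_odd_not_zero(num) == is_odd_equal_one(num)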

    Read the article

  • Populate array from vector

    - by Zag zag..
    Hi, I would like to populate a 2-dimensional array from a vector. I think the best way to explain myself is to put some examples (with an array of [3,5] size). When vector is: [1, 0] [ [4, 3, 2, 1, 0], [4, 3, 2, 1, 0], [4, 3, 2, 1, 0] ] When vector is: [-1, 0] [ [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4] ] When vector is: [-2, 0] [ [0, 0, 1, 1, 2], [0, 0, 1, 1, 2], [0, 0, 1, 1, 2] ] When vector is: [1, 1] [ [2, 2, 2, 1, 0], [1, 1, 1, 1, 0], [0, 0, 0, 0, 0] ] When vector is: [0, 1] [ [2, 2, 2, 2, 2], [1, 1, 1, 1, 1], [0, 0, 0, 0, 0] ] Have you got any ideas, a good library or a plan? Any comments are welcome. Thanks. Note: I consulted Ruby's "Matrix" and "Vector" classes, but I don't see any way to use them for this... Edit: In fact, each value is the number of cells (from the current cell to the last cell) according to the given vector. If we take the example where the vector is [-2, 0], with the value *1* (at array[2, 3]): array = [ [<0>, <0>, <1>, <1>, <2>], [<0>, <0>, <1>, <1>, <2>], [<0>, <0>, <1>, *1*, <2>] ] ... we can read it like this: the vector [-2, 0] means that -2 is for cols and 0 is for rows. So if we are in array[2, 3], we can move 1 time to the left (left because -2 is negative) with a length of 2 (because -2.abs == 2). And we don't move up or down, because rows is 0.
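
    (Reading the examples as "how many whole vector steps fit between this cell and the edge of the grid", a rough sketch follows; Python is used for illustration, the helper name is made up, and the (cols, rows) ordering of the vector follows the edit above.)

        def populate(rows, cols, vector):
            dx, dy = vector            # dx applies to columns, dy to rows
            grid = []
            for r in range(rows):
                row = []
                for c in range(cols):
                    steps = []
                    if dx > 0:
                        steps.append((cols - 1 - c) // dx)
                    elif dx < 0:
                        steps.append(c // -dx)
                    if dy > 0:
                        steps.append((rows - 1 - r) // dy)
                    elif dy < 0:
                        steps.append(r // -dy)
                    # a zero vector component puts no limit on that axis
                    row.append(min(steps) if steps else 0)
                grid.append(row)
            return grid

        print(populate(3, 5, (1, 0)))    # [[4, 3, 2, 1, 0], [4, 3, 2, 1, 0], [4, 3, 2, 1, 0]]
        print(populate(3, 5, (-2, 0)))   # [[0, 0, 1, 1, 2], [0, 0, 1, 1, 2], [0, 0, 1, 1, 2]]
        print(populate(3, 5, (1, 1)))    # [[2, 2, 2, 1, 0], [1, 1, 1, 1, 0], [0, 0, 0, 0, 0]]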

    Read the article

  • Reverse wildcard search in codeigniter

    - by Andy Platt
    I am implementing a snippet-based content management system in my current project. Snippets can be associated with a page either by an exact match on the full URL, falling back to a wildcard match on a partial URL, or finally a default snippet. To implement this I have created a table of page associations with a wildcard flag; the system first checks the current URL against the non-wildcard associations and, if it doesn't find a match, it checks against the partial URLs where the wildcard flag is set. In order to achieve this I am getting all the partial URLs from the database, putting them into an array and then walking the array to check for a match against the current URL: protected function _check_wildcard($url = NULL) { if($url) { $q = $this->db->where('wildcard' ,'Y') ->from('content') ->get(); $wildcards = $q->result_array(); foreach($wildcards AS $wildcard) { if(strpos($url,$wildcard['url']) > 0) { return $wildcard['snipppet_id']; } } } else { return NULL; } } Can anyone suggest a better way to do this - preferably one that doesn't involve having to constantly download the full list of all the wildcards each time I load a page, as I am afraid that this will have a negative effect on the scalability of the system down the line?

    Read the article

  • Is it possible in SQLAlchemy to filter by a database function or stored procedure?

    - by Rico Suave
    We're using SQLAlchemy in a project with a legacy database. The database has functions/stored procedures. In the past we used raw SQL and we could use these functions as filters in our queries. I would like to do the same for SQLAlchemy queries if possible. I have read about @hybrid_property, but some of these functions need one or more parameters. For example, I have a User model that has a JOIN to a bunch of historical records. These historical records for this user have a date and a debit and credit field, so we can look up the balance of a user at a specific point in time by doing a SUM(credit) - SUM(debit) up until the given date. We have a database function for that called dbo.Balance(user_id, date_time). I can use this to check the balance of a user at a given point in time. I would like to use this as a criterion in a query, to select only users that have a negative balance at a specific date/time. selection = users.filter(coalesce(Users.status, 0) == 1, coalesce(Users.no_reminders, 0) == 0, dbo.pplBalance(Users.user_id, datetime.datetime.now()) < -0.01).all() This is of course a non-working example, just for you to get the gist of what I'd like to do. The solution looks to be to use hybrid properties, but as I mentioned above, these only work without parameters (as they are properties, not methods). Any suggestions on how to implement something like this (if it's even possible) are welcome. Thanks,
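
    (One approach, offered here only as a sketch since it is not part of the original question: SQLAlchemy's generic func construct renders schema-qualified function calls such as dbo.Balance(...), and the result can be used directly as a filter criterion. Users and session below are assumed to be the model and session from the question.)

        import datetime
        from sqlalchemy import func

        # func.dbo.Balance(...) renders as "dbo.Balance(users.user_id, :param)",
        # so the legacy database function becomes an ordinary filter expression.
        balance = func.dbo.Balance(Users.user_id, datetime.datetime.now())

        selection = (
            session.query(Users)
            .filter(
                func.coalesce(Users.status, 0) == 1,
                func.coalesce(Users.no_reminders, 0) == 0,
                balance < -0.01,
            )
            .all()
        )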

    Read the article

  • I am trying to access the individual bytes in a floating point number and I am getting unexpected results

    - by oweinh
    So I have this so far: #include <iostream> #include <string> #include <typeinfo> using namespace std; int main () { float f = 3.45; // just an example fp# char* ptr = (char*)&f; // a character pointer to the first byte of the fp#? cout << int(ptr[0]) << endl; // these lines are just to see if I get what I cout << int(ptr[1]) << endl; // am looking for... I want ints that I can cout << int(ptr[2]) << endl; // otherwise manipulate. cout << int(ptr[3]) << endl; } the result is: -51 -52 92 64 so obviously -51 and -52 are not in the byte range that I would expect for a char... I have taken information from similar questions to arrive at this code and from all discussions, a conversion from char to int is straightforward. So why negative values? I am trying to look at a four-byte number, therefore I would expect 4 integers, each in the range 0-255. I am using Codeblocks 13.12 with gcc 4.8.1 with option -std=C++11 on a Windows 8.1 device.
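
    (The numbers are actually consistent: plain char is signed with this compiler, so the bytes 205 and 204 print as -51 and -52; casting each element through unsigned char, e.g. int(static_cast<unsigned char>(ptr[0])), gives the expected 0-255 view. As an illustrative cross-check of the underlying little-endian float32 bytes, in Python:)

        import struct

        # 3.45 as a little-endian 32-bit float is the byte sequence CD CC 5C 40.
        raw = struct.pack("<f", 3.45)
        print(list(raw))                                  # [205, 204, 92, 64]  (unsigned view)
        print([b - 256 if b > 127 else b for b in raw])   # [-51, -52, 92, 64]  (what a signed char prints)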

    Read the article

  • VB Change Calculator

    - by BlueBeast
    I am creating a VB 2008 change calculator as an assignment. The program is to use the amount paid - the amount due to calculate the total.(this is working fine). After that, it is to break that amount down into dollars, quarters, dimes, nickels, and pennies. The problem I am having is that sometimes the quantity of pennies, nickels or dimes will be a negative number. For example $2.99 = 3 Dollars and -1 Pennies. SOLVED Thanks to the responses, here is what I was able to make work with my limited knowledge. Option Explicit On Option Strict Off Option Infer Off Public Class frmMain Private Sub btnClear_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnClear.Click 'Clear boxes lblDollarsAmount.Text = String.Empty lblQuartersAmount.Text = String.Empty lblDimesAmount.Text = String.Empty lblNickelsAmount.Text = String.Empty lblPenniesAmount.Text = String.Empty txtOwed.Text = String.Empty txtPaid.Text = String.Empty lblAmountDue.Text = String.Empty txtOwed.Focus() End Sub Private Sub btnExit_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnExit.Click 'Close application' Me.Close() End Sub Private Sub btnCalculate_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnCalculate.Click ' Find Difference between Total Price and Total Received lblAmountDue.Text = Val(txtPaid.Text) - Val(txtOwed.Text) Dim intChangeAmount As Integer = lblAmountDue.Text * 100 'Declare Integers Dim intDollarsBack As Integer Dim intQuartersBack As Integer Dim intDimesBack As Integer Dim intNickelsBack As Integer Dim intPenniesBack As Integer ' Change Values Const intDollarValue As Integer = 100 Const intQuarterValue As Integer = 25 Const intDimeValue As Integer = 10 Const intNickelValue As Integer = 5 Const intPennyValue As Integer = 1 'Dollars intDollarsBack = CInt(Val(intChangeAmount \ intDollarValue)) intChangeAmount = intChangeAmount - Val(Val(intDollarsBack) * intDollarValue) lblDollarsAmount.Text = intDollarsBack.ToString 'Quarters intQuartersBack = CInt(Val(intChangeAmount \ intQuarterValue)) intChangeAmount = intChangeAmount - Val(Val(intQuartersBack) * intQuarterValue) lblQuartersAmount.Text = intQuartersBack.ToString 'Dimes intDimesBack = CInt(Val(intChangeAmount \ intDimeValue)) intChangeAmount = intChangeAmount - Val(Val(intDimesBack) * intDimeValue) lblDimesAmount.Text = intDimesBack.ToString 'Nickels intNickelsBack = CInt(Val(intChangeAmount \ intNickelValue)) intChangeAmount = intChangeAmount - Val(Val(intNickelsBack) * intNickelValue) lblNickelsAmount.Text = intNickelsBack.ToString 'Pennies intPenniesBack = CInt(Val(intChangeAmount \ intPennyValue)) intChangeAmount = intChangeAmount - Val(Val(intPenniesBack) * intPennyValue) lblPenniesAmount.Text = intPenniesBack.ToString End Sub End Class
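
    (The core of the fix is doing the whole breakdown in whole cents, rounded once, so floating-point error can never produce a negative coin count; the VB above does exactly that with integer division. A compact sketch of the same idea, in Python for illustration:)

        def make_change(amount_owed, amount_paid):
            """Break the change down into dollars, quarters, dimes, nickels and pennies."""
            # Work in whole cents so floating-point rounding cannot yield "-1 pennies".
            cents = round((amount_paid - amount_owed) * 100)
            change = {}
            for name, value in [("dollars", 100), ("quarters", 25),
                                ("dimes", 10), ("nickels", 5), ("pennies", 1)]:
                change[name], cents = divmod(cents, value)
            return change

        print(make_change(7.01, 10.00))
        # {'dollars': 2, 'quarters': 3, 'dimes': 2, 'nickels': 0, 'pennies': 4}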

    Read the article

  • using own mail server with external domain and dns. Now have internal dns. dkim test not working

    - by mojotaker
    I am not very knowledgeable in this area, but have been able to make great headway. Now I am stuck. I set up my own mail server, e.g. mailbox.example.com. I had the domain DNS point to my mail server in my office and was able to get everything working fine, such as DKIM and SPF records. Recently I decided to set up an internal DNS server in the office so as to resolve some addresses for some development servers internally. The problem now is that my mail server is sitting on the internal DNS server (the mail server is on the same box as the DNS server). It is still able to send and receive emails, but I am not sure DKIM is working properly: when I try to do a DKIM test ("amavisd test keys") I get "invalid (public key: not available)", and I know that means I have a DNS issue. So what should I do? I am currently looking at my internal DNS zone file and I don't know what to do (I am using the BIND DNS server on an Ubuntu Server box). Do I configure a DKIM TXT record on the local DNS? Or is there a way to forward DKIM "requests" to the external DNS? Or do I have this whole thing done wrong? To be clear: my internal domain name is the same as my external domain name (i.e. example.com); I have a mail server within my internal domain, mailbox.example.com, that uses my external domain DNS (the external DNS has been set up to point to my email server, which of course is now sitting behind my internal DNS); and DKIM fails the DKIM test, so I don't think it is working. I need help determining the proper setup. What is the proper way to set this up? Thank you. Update: Here is my local DNS zone file: ; ; BIND data file for local loopback interface ; $TTL 604800 @ IN SOA webserver.example.com. root.example.com. ( //dns and webserver on the same box 2012030809 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS webserver.example.com. @ IN A 192.168.1.117 @ IN AAAA ::1 ns IN A 192.168.1.117 www IN A xx.xx.xx.xxx // ip of external domain box (bluehost) work around to let local clients access website newsletter IN A xx.xx.xxx.117 // external ip address of local network mailbox.example.com. IN A 192.168.1.111 // internal ip of mailbox (mailserver webserver.example.com. IN A 192.168.1.117 //internal ip of a webserver

    Read the article

  • Windows explorer locks files

    - by John Prince
    I'm using Office 2010 & Windows 7 Home Premium 64-bit. My problem starts when I attempt to save e-mail messages to my PC that I have received via Outlook (my ISP is Comcast). I'm using the default .msg file extension option when I attempt to save these e-mails. The resultant files are locked and do not show the normal "envelope" icon. Instead, it’s a “blank page” icon with the right upper corner folded in. These files refuse to open either by double clicking on them or right clicking and trying to open them with Outlook. And when I return to Outlook, I discover that Outlook is now hung up and I have to close it via the Task Manager. To make matters worse, I’ve also discovered that every e-mail message that I've saved on my PC over the years has also somehow become locked and their original "envelope" icon has been replaced with the "blank page" icon. I found and installed an application called LockHunter. As a result, when I right click on a saved and locked e-mail message, I’ve given an option to find out what's locking it. Each time I'm told that the culprit is Windows explorer.exe. When I unlock the file the normal envelope icon is sometimes displayed (but not always) but at least the file can then be opened. But the file is still “squirrely” as it can’t be moved or saved to a folder until it’s unlocked again. On this second attempt, LockHunter says it’s now locked by Outlook.exe. By the way, I don't have this issue when I save Word, Excel & PowerPoint files; only with Outlook. I've exhausted every remedy that I can think of including: making sure that the file and folder options are checked to always show icons and not thumbnails; running the Windows 7 & Office 2010 repair options which find nothing amiss; running a complete system scan with Windows Malicious Software Removal Tool with negative results; verifying that Outlook is the default for opening e-mails; updating all of my applications via Secunia Personal Software Inspector; uninstalling every application that I felt was unnecessary; doing a registry cleanup via CC Cleaner; having Windows Security Essentials always on (it did find one Java Trojan recently which was quarantined and then deleted); uninstalling a bunch of non-Microsoft shell extensions; and deactivating all of the Outlook Add-ins and then re-activating each one. None of this solves the problem. I’d welcome any advice on how to resolve this.

    Read the article

  • How to block subreddits with BIND9?

    - by user1391189
    Please help me block NSFW subreddits like this one (http://www.reddit.com/r/NSFW/) I would like to keep access to SFW subreddits, but block certain subreddits that are distracting or NSFW. I know how to filter domains. (see files below) But how do I apply the filter only to certain subreddits? So far I have set up the following files: blocklist.conf zone "adimages.go.com" { type master; file "dummy-block"; }; zone "admonitor.net" { type master; file "dummy-block"; }; zone "ads.specificpop.com" { type master; file "dummy-block"; }; ... named.conf options { allow-query { 127.0.0.1; }; allow-recursion { 127.0.0.1; }; directory "c:\bind\etc"; notify no; }; zone "." IN { type hint; file "c:\bind\etc\named.root"; }; zone "localhost" IN { allow-update { none; }; file "c:\bind\etc\localhost.zone"; type master; }; zone "0.0.127.in-addr.arpa" IN { allow-update { none; }; file "c:\bind\etc\named.local"; type master; }; key "rndc-key" { algorithm hmac-md5; secret "O5VdbBKKEMzuLYjM60CxwuLLURFA6peDYHCBvZCqjoa6KtL1ggD7OTLeLtnu2jR5I5cwA/MQ8UdHc+9tMJRSiw=="; }; controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; }; }; //Blocklist include "c:\bind\etc\blocklist.conf"; dummy-block $TTL 604800 @ IN SOA localhost. root.localhost. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS localhost. @ IN A 127.0.0.1 * IN A 127.0.0.1

    Read the article

  • Block IP Address including ICMP using UFW

    - by dr jimbob
    I prefer ufw to iptables for configuring my software firewall. After reading about this vulnerability also on askubuntu, I decided to block the fixed IP of the control server: 212.7.208.65. I don't think I'm vulnerable to this particular worm (and understand the IP could easily change), but wanted to answer this particular comment about how you would configure a firewall to block it. I planned on using: # sudo ufw deny to 212.7.208.65 # sudo ufw deny from 212.7.208.65 However, as a test that the rules were working, I tried pinging after I set up the rules and saw that my default ufw settings let ICMP through even from an IP address set to REJECT or DENY. # ping 212.7.208.65 PING 212.7.208.65 (212.7.208.65) 56(84) bytes of data. 64 bytes from 212.7.208.65: icmp_seq=1 ttl=52 time=79.6 ms ^C --- 212.7.208.65 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 79.630/79.630/79.630/0.000 ms Now, I'm worried that my ICMP settings are too generous (conceivably this or a future worm could set up an ICMP tunnel to bypass my firewall rules). I believe the relevant part of my iptables rules is given below (and even though grep doesn't show it, the rules are associated with the chains shown): # sudo iptables -L -n | grep -E '(INPUT|user-input|before-input|icmp |212.7.208.65)' Chain INPUT (policy DROP) ufw-before-input all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-before-input (1 references) ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 3 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 4 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 11 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 12 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ufw-user-input all -- 0.0.0.0/0 0.0.0.0/0 Chain ufw-user-input (1 references) DROP all -- 0.0.0.0/0 212.7.208.65 DROP all -- 212.7.208.65 0.0.0.0/0 How should I go about making it so ufw blocks ICMP when I specifically attempt to block an IP address? My /etc/ufw/before.rules has in part: # ok icmp codes -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT -A ufw-before-input -p icmp --icmp-type source-quench -j ACCEPT -A ufw-before-input -p icmp --icmp-type time-exceeded -j ACCEPT -A ufw-before-input -p icmp --icmp-type parameter-problem -j ACCEPT -A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT I tried changing ACCEPT above to ufw-user-input: # ok icmp codes -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ufw-user-input -A ufw-before-input -p icmp --icmp-type source-quench -j ufw-user-input -A ufw-before-input -p icmp --icmp-type time-exceeded -j ufw-user-input -A ufw-before-input -p icmp --icmp-type parameter-problem -j ufw-user-input -A ufw-before-input -p icmp --icmp-type echo-request -j ufw-user-input But ufw wouldn't restart after that. I'm not sure why (still troubleshooting), and I'm also not sure if this is sensible. Will there be any negative effects (besides forcing the software firewall to force ICMP through a few more rules)?

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!

    Read the article

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to do load balancing for a custom webserver I created. Right now I am noticing an increasing delay with HAProxy as the size of the return message increases. For example, I ran four different tests; here are the results: Response 15kb through HAProxy: Avg. response time: .34 secs Transaction rate: 763 trans/sec Throughput: 11.08 MB/sec Response 2kb through HAProxy: Avg. response time: .08 secs Transaction rate: 1171 trans/sec Throughput: 2.51 MB/sec Response 15kb directly to server: Avg. response time: .11 sec Transaction rate: 1046 trans/sec Throughput: 15.20 MB/sec Response 2kb directly to server: Avg. response time: .05 secs Transaction rate: 1158 trans/sec Throughput: 2.48 MB/sec All transactions are HTTP requests. As you can see, the difference in response times is much bigger when the response is larger than when it is smaller. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the test itself was run using siege. And during the test there was only one server behind HAProxy (the same one that was used in the direct-to-server tests). Here is my haproxy.config file: global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 10000 user haproxy group haproxy daemon #debug defaults log global mode http option httplog option dontlognull retries 3 option redispatch option httpclose maxconn 10000 contimeout 10000 clitimeout 50000 srvtimeout 50000 balance roundrobin stats enable stats uri /stats listen lb1 10.1.10.26:80 maxconn 10000 server app1 10.1.10.200:8080 maxconn 5000 I couldn't find much in terms of options in this file that would help my problem. I have heard suggestions that I may have to adjust a few of my sysctl settings. I could not find a lot of information on this, however; most documentation on the sysctl stuff is for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to do auto-tuning, so I have no clue what I should or shouldn't be changing. Most settings changes I tried had no effect or a negative effect on performance. Just a notice: this is a very preliminary test, and my hope is that at deployment time my HAProxy will be able to balance 10k-20k requests/sec to many servers, so if anyone could provide information to help me reach that goal, it would be much appreciated. Thank you very much for any information you can provide. And if you need any more information from me please let me know, I will get you anything I can.

    Read the article

  • Hidden DNS master only sending notify to one slave

    - by Rob
    My hidden DNS master is only sending notifies to one of the name servers for a zone. I have 3 named servers, ns0, ns1 & ns2, all running BIND 9.7.3.dfsg-1ubuntu4.1. When an update is processed, the master (ns0) seems to behave normally. ns0 (192.168.2.50) zone domain.org/IN: sending notifies (serial 2012060703) client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR started: TSIG rndc-key client 192.168.2.52#42892: transfer of 'domain.org/IN': AXFR-style IXFR ended ns2 (192.168.2.52) client 192.168.2.50#3762: received notify for zone 'domain.org': TSIG 'rndc-key' zone domain.org/IN: Transfer started. transfer of 'domain.org/IN' from 192.168.2.50#53: connected using 192.168.2.52#55747 zone domain.org/IN: transferred serial 2012060704: TSIG 'rndc-key' transfer of 'domain.org/IN' from 192.168.2.50#53: Transfer completed: 1 messages, 34 records, 1028 bytes, 0.001 secs (1028000 bytes/sec) Nothing happens on ns1. I've turned up the logging level, but there's no information in syslog about the actual name servers BIND has sent notifications to, so I guess this is something it doesn't log. I've also tried watching tcpdump; it never makes any attempt to notify ns1, only ns2: 192.168.2.50.56278 > 192.168.2.52.53: [udp sum ok] 56418 notify [b2&3=0x2400] [1a] [1au] ? SOA? domain.org. domain.org. [0s] SOA ns1.domain.net. dnsmaster.domain.net. ? 2012060801 10800 3600 604800 3600 ar: rndc-key. ANY [0s] TSIG hmac-md5.sig-alg.reg.int. fudge=300 maclen=16 origid=56418 error=0 otherlen=0 (174) The authoritative zone has both ns1 and ns2 records: $ORIGIN domain.org. $TTL 3h @ IN SOA ns1.domain.net. dnsmaster.domain.net. ( 2012060801 ; Serial yyyymmddnn 3h ; Refresh After 3 hours 1h ; Retry Retry after 1 hour 1w ; Expire after 1 week 1h ) ; Minimum negative caching of 1 hour @ 3600 IN NS ns1.domain.net. @ 3600 IN NS ns2.domain.net. // Edit: I have added also-notify {192.168.2.51;192.168.2.52;}; explicitly to the zone file and it all works fine; both ns1 and ns2 get notify messages and transfers succeed. I was under the impression BIND would automatically send notifies to all NS records on a zone; maybe it's bugged?

    Read the article

  • How do you backup 40+ Centos5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers, and don't know how to back them up. We need low-level clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc. We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use as they only copy files, not partitions, all attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla, which looks difficult to install and invasive. Mondo only seems to support backup if you are local to the machine. Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software packages? Any ideas? This must be a pretty standard problem, for which googling doesn't give an obvious answer.

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster of weird, low-maturity trialware, or random forum threads from 3-4 years ago describing various antiquated methods to do it. The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary for editing purposes, with the aim of eventually re-encoding it afterwards? It seems to me like even the simplest of formats and tasks are a disaster of endless trial & error, or expertise only known by hardened experts who have a Swiss Army knife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly. VirtualDub is an old piece of software people keep recommending, but it doesn't seem to support newer formats. I don't even know how to tell with certainty which codecs a video has, whether the image is interlaced or not, and what resolution and codecs I'm dealing with. Problems: Choosing a wrong interlace option which diminishes quality Choosing a wrong pixel aspect ratio (stretches the image) Choosing a wrong "project type" in Premiere causing footage to require scaling Being forced to use some weird program that will have any number of negative effects What I'm looking for: Books or "real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats. Workflow guidelines on identifying a format and going from one format to another without the problems mentioned above. Documentation on what programs like Adobe Premiere can and can't do with regards to formats, so that I don't use a wrench as a hammer. TL;DR How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is Premiere a suitable program to handle cropping and encoding, or should other tools be used for this, when making a video montage from a variety of source formats? What are some good books to read that specifically deal with converting videos that use any number of codecs?

    Read the article

  • Incremental RPM package version "numbers" for x.y.z > x.y.z-beta (or alpha, rc, etc)

    - by Jonathan Clarke
    In order to publish RPM packages of several different versions of some software, I'm looking for a way to specify version "numbers" that are considered "upgrades", and include the differentiation of several pre-release versions, such as (in order): "2.4.0 alpha 1", "2.4.0 alpha 2", "2.4.0 alpha 3", "2.4.0 beta 1", "2.4.0 beta 2", "2.4.0 release candidate", "2.4.0 final", "2.4.1", "2.4.2", etc. The main issue I have with this is that RPM considers that "2.4.0" comes earlier than "2.4.0.alpha1", so I can't just add the suffix on the end of the final version number. I could try "2.4.0.alpha1", "2.4.0.beta1", "2.4.0.final", which would work, except for the "release candidate" that would be considered later than "2.4.0.final". An alternative I considered is using the "epoch:" section of the RPM version number (the epoch: prefix is considered before the main version number so that "1:2.4.0" is actually earlier than "2:1.0.0"). By putting a timestamp in the epoch: field, all the versions get ordered as expected by RPM, because their versions appear to increment in time. However, this fails when new releases are made on several major versions at the same time (for example, 2.3.2 is released after 2.4.0, but their version for RPM are "20121003:2.3.2" and "20120928:2.4.0" and systems on 2.3.2 can't get "upgraded" to 2.4.0, because rpm sees it as an older version). In this case, yum/zypper/etc refuse to upgrade to 2.4.0, thus my problem. What version numbers can I use to achieve this, and make sure that RPM always considers the version numbers to be in order. Or if not version numbers, other mechanism in RPM packaging? Note 1: I would like to keep the "Release:" field of the spec file for it's original purpose (several releases of packages, including packaging changes, for the same version of the packaged software). Note 2: This should work on current production versions of major distributions, such as RHEL/CentOS 6 and SLES 11. But I'm interested in solutions that don't, too, so long as they don't involve recompiling rpm! Note 3: On Debian-like systems, dpkg uses a special component in the version number which is the "~" (tilde) character. This causes dpkg to count the suffix as "negative" ordering, so that "2.4.0~anything" will come before "2.4.0". Then, normal ordering applies after the "~", so "2.4.0~alpha1" comes before "2.4.0~beta1" because "alpha" comes before "beta" alphabetically. I'm not necessarily looking to use the same scheme for RPM packages (I'm pretty sure no such equivalent exists), so this is just FYI.

    Read the article

  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount.

    Read the article

  • SQL SERVER – SSMS: Memory Usage By Memory Optimized Objects Report

    - by Pinal Dave
    At conferences and at speaking engagements at the local UG, there is one question that keeps coming up which I wish were never asked. The question revolves around, “Why is SQL Server using up all the memory and not releasing it even when idle?” Well, the answer can be long, and with the release of SQL Server 2014 it got even more complicated. SQL Server 2014 introduces the option of In-Memory OLTP, which is a completely new concept, and our dependency on memory has increased manifold. In reality, nothing much changes, but we additionally have memory-optimized objects (tables and stored procedures) which reside completely in memory and improve performance. As a DBA, it is humanly impossible to get the hang of all the innovations and the new features introduced in the next version. So today’s blog is about the report added to SSMS which gives a high-level view of this new feature. This report is available only from SQL Server 2014 onwards, because that is where the feature was introduced. Earlier versions of SQL Server Management Studio would not show the report in the list. If we try to launch the report on a database that does not have an In-Memory filegroup defined, then we see a message in the report. To demonstrate, I have created a fresh new database called MemoryOptimizedDB with no special filegroup. Here is the query used to identify whether a database has a memory-optimized filegroup or not. SELECT TOP(1) 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX' Once we add the filegroup using the command below, we see a different version of the report. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILEGROUP [IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA GO The report is still empty because we have not defined any memory-optimized table in the database. Total allocated size is shown as 0 MB. Now, let’s add the folder location to the filegroup and also create a few in-memory tables. We have used the nomenclature of IMO to denote “InMemory Optimized” objects. USE [master] GO ALTER DATABASE [MemoryOptimizedDB] ADD FILE ( NAME = N'MemoryOptimizedDB_IMO', FILENAME = N'E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MemoryOptimizedDB_IMO') TO FILEGROUP [IMO_FG] GO You may have to change the path based on your SQL Server configuration. Below is the script to create the table. USE MemoryOptimizedDB GO --Drop table if it already exists. IF OBJECT_ID('dbo.SQLAuthority','U') IS NOT NULL DROP TABLE dbo.SQLAuthority GO CREATE TABLE dbo.SQLAuthority ( ID INT IDENTITY NOT NULL, Name CHAR(500) COLLATE Latin1_General_100_BIN2 NOT NULL DEFAULT 'Pinal', CONSTRAINT PK_SQLAuthority_ID PRIMARY KEY NONCLUSTERED (ID), INDEX hash_index_sample_memoryoptimizedtable_c2 HASH (Name) WITH (BUCKET_COUNT = 131072) ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA) GO As soon as the above script is executed, both the table and the index are created. If we run the report again, we see something like the following. Notice that table memory is zero but the index is using memory. This is due to the fact that a hash index needs memory to manage the buckets it creates. So even if the table is empty, the index consumes memory. More about the internals of how In-Memory indexes and tables work will be reserved for future posts. Now, use the script below to populate the table with 10,000 rows. INSERT INTO SQLAuthority VALUES (DEFAULT) GO 10000 Here is the same report after inserting 10,000 rows into our InMemory table. There are three sections in the whole report:
Total memory consumed by In-Memory objects; a pie chart showing memory distribution based on type of consumer – table, index and system; and details of memory usage by each table. The information about all three is taken from one single DMV, sys.dm_db_xtp_table_memory_stats. This DMV contains memory usage statistics for both user and system In-Memory tables. If we query the DMV and look at the data, we can easily notice that the system tables have negative object IDs. So, to look at user table memory usage, below is an over-simplified version of the query. USE MemoryOptimizedDB GO SELECT OBJECT_NAME(OBJECT_ID), * FROM sys.dm_db_xtp_table_memory_stats WHERE OBJECT_ID > 0 GO This report helps a DBA identify which in-memory object is taking a lot of memory, which can be used as a pointer when designing a solution. I am sure that in the future we will discuss the whole concept of In-Memory tables at length on this blog. To read more about In-Memory OLTP, have a look at the In-Memory OLTP Series at Balmukund’s Blog. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Initial Review - Mastering Unreal Technology I: Introduction to Level Design with Unreal Engine 3

    - by Matt Christian
    Recently I purchased 3 large volumes on using the Unreal 3 Engine to create levels and custom games.  This past weekend I cracked the spine of the first and started reading.  Here are my early impressions (I'm ~250 pages into it, with appendices it's about 900). Pros Interestingly, the book starts with an overview of the Unreal engines leading up to Unreal 3 (including Gears of War) and follows with some discussion on planning a mod and what goes into the game development process.  This is nice for an intro to the book and is much preferred rather than a simple chapter detailing what is on the included CD, how to install and setup UnrealEd, etc...  While the chapter on Unreal history and planning can be considered 'fluff', it's much less 'fluffy' than most books provide. I need to mention one thing here that is pretty crucial in the way I'm going to continue reviewing this book.  Most technical books like this are used as a shelf reference; as a thick volume you use for looking up techniques every now and again.  Even so, I prefer reading from cover to cover, including chapters I may already be knowledgable on (I'm sure this is typical for most people).  If there was a chapter on installing UnrealEd (the previously mentioned 'fluff'), I would probably force myself to read it, even though I've installed the game and engine multiple times on different systems. Chapter 3 is where we really get to the introduction piece of UnrealEd, creating your first basic level.  This large chapter details creating two small rooms, adding static meshes, adding lighting, creating and adding particle emitters, creating a door that animates with Unreal Matinee and Kismet, static meshes with physics, and other little additions to make your level look less beginner.  This really is a chapter that overviews the entire rest of the book, as each chapter following details the creation and intermediate usages of Static Meshes, Materials, Lighting, etc... One other very nice part to this book is the way the tutorials are setup.  Each tutorial builds off the previous and all are step-by-step.  If you haven't completed one yet, you can find all the starting files on the CD that comes with the book. Cons While the description of the overview chapter (Chapter 3) is fresh in your mind, let me start the cons by saying this chapter is setup extremely confusing for the noob.  At one point, you end up creating a door mesh and setting it up as a InteropMesh so that it is ready to be animated, only to switch to particles and spend a good portion of time working on a different piece of the level.  Yes, this is actually how I develop my levels (jumping back and forth), though it's very odd for a book to jump out of sequence. The next item might be a positive or a negative depending on your skill level with UnrealEd.  Most of the introduction to the editor layout is found in one of the Appendices instead of before Chapter 3.  For new readers, this might lead to confusion as Appendix A would typically be read between Chapter 2 and 3.  However, this is a positive for those with some experience in UnrealEd as they don't have to force themselves through a 'learn every editor button' chapter.  I'm listing this in the Cons section as the book is 'Introduction to...' and is probably going to be directed toward a lot of very beginner developers. Finally, there's a lack of general description to a lot of the underlying engine and what each piece in UnrealEd is or does.  
Sometimes you'll be performing Tutorial after Tutorial with barely a paragraph in between describing ANY of what you've just done.  Tutorial 1.1 Step 6 says to press Button X, so you do.  But why?  This is in part a problem with the structure of the tutorials rather than the content of the book.  Since the tutorials are so focused on a step-by-step (or procedural) description of a process, you learn the process and not why you're doing that.  For example, you might learn how to size a material to a surface, but will only learn what buttons to press and not what each one does. This becomes extremely apparent in the chapter on Static Meshes as most of the chapter is spent in 3D Studio Max.  Since there are books on 3DSM and modelling, the book really only tells you the steps and says to go grab a book on modelling if you're really interested in 3DSM.  Again, I've learned the process to develop my own meshes in 3DSM, but I don't know the why behind the steps. Conclusion So far the book is very good though I would have a hard time recommending it to a complete beginner.  I would suggest anyone looking at this book (obviously including the other 2, more advanced volumes) to pick up a copy of UDK or Unreal 3 (available online or via download services such as Steam) and watch some online tutorials and play with it first.  You'll find plenty of online videos available that were created by the authors and may suit as a better introduction to the editor.

    Read the article

  • SQL Saturday #44 Huntington Beach Recap

    What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach. It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location. I just couldn't help noticing the shocking number of high-priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martin, 3 Maseratis, 1 Rolls Royce, and 2 Lamborghinis). It made me think about this: Price of all those cars: $150,000+. Impacting the ability of people to learn: priceless. We have positively impacted the education, knowledge and capabilities of not only our attendees, but also all of their companies and the people they might help as well. That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So let's talk about the event. Overall: We had over 220 people register for the event and 180+ people attend. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way, but what event doesn't? I think everything turned out great. I did not hear any negative comments and heard lots of positive comments, along with people asking when the next one is going to be (more on that later). Location: Golden West College. We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. The day of, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed he was there for, and he was even proactive in getting some things that I had not even thought of (i.e. a dumpster for all of our garbage). I cannot thank Herb enough, along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the wireless on campus was absolutely FANTASTIC! Some lessons learned: Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this year's event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors: We obviously could not have put together our event without our sponsors, so I certainly have to show them some love.
Platinum sponsors: Quest Software (http://www.quest.com) and My Space (http://www.myspace.com/). Gold: Strategy Companion (http://www.strategycompanion.com). Silver: Fusion-IO (http://www.fusionio.com). Bronze: WestClinTech (http://westclintech.com), Professional Association For SQL Server (http://www.sqlpass.org), Attunity (http://www.attunity.com), and Sharepoint 360 (http://www.sharepoint360.com). Some additional thanks: Andy Warren (Blog | Twitter), who was always there to answer my questions and help out when I had some issues or questions with the website. The amount of work that he and everyone else put into SQL Saturday is amazing. What a great gift to the community! Einstein Bros. Bagels were our breakfast vendor and arrived perfectly on time with yummy bagels, sweets and, most importantly, coffee. Luccis Deli (http://www.luccisdeli.com) was our lunch vendor. They were great to work with and the food was excellent. They worked with us to give us a great price. I heard lots of great comments about the lunches. Definitely not your ordinary box lunch. Moving forward: Unfortunately, the work does not end after the event. We have a few things to clear up, such as surveys, sponsor stuff, getting presentations uploaded to the website, expense reimbursement, stuff like that. Hopefully, all that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local user group and PASS: San Diego SQL Server Users Group (http://www.sdsqlug.org/home/index.cfm), Orange County SQL Server Users Group (http://www.sqloc.com/), L.A. SQL Server Users Group (http://www.sql.la/), SQL PASS (http://www.sqlpass.org/), and 24 Hours of PASS (http://www.sqlpass.org/24hours/2010/). So stay tuned, there will be more events to come in SoCal!!

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles – no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration, which for EC2 you need to do with the command line, and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.

To start with, you need to manually set up your app in an EC2 VM; for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, then you can isolate your logic in one library, with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code. When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow this guide to set up the AWS autoscale command line tool. Now we’re ready to go.

1. Create a launch configuration
This launch configuration tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

as-create-launch-config sc-xyz-launcher  # name of the launch config
  --image-id ami-7b9e9f12                # id of the AMI you extracted from your VM
  --region eu-west-1                     # which region the new instance gets created in
  --instance-type t1.micro               # size of the instance to create
  --group quicklaunch-1                  # security group for the new instance

2. Create an auto-scaling group
The auto-scaling group links to the launch config, and defines the overall configuration of the collection of instances:

as-create-auto-scaling-group sc-xyz-asg  # auto-scaling group name
  --region eu-west-1                     # region to create in
  --launch-configuration sc-xyz-launcher # name of the launch config to invoke for new instances
  --min-size 1                           # minimum number of nodes in the group
  --max-size 5                           # maximum number of nodes in the group
  --default-cooldown 300                 # period to wait (in seconds) after each scaling event, before checking if another scaling event is required
  --availability-zones eu-west-1a eu-west-1b eu-west-1c # which availability zones you want your instances to be allocated in – multiple entries means EC2 will use any of them

3. Create a scale-up policy
The policy dictates what will happen in response to a scaling event being triggered from a “high” alarm being breached.
It links to the auto-scaling group; this sample results in one additional node being spun up:

as-put-scaling-policy scale-up-policy    # policy name
  -g sc-psod-woker-asg                   # auto-scaling group the policy works with
  --adjustment 1                         # size of the adjustment
  --region eu-west-1                     # region
  --type ChangeInCapacity                # type of adjustment; this specifies a fixed number of nodes, but you can use PercentChangeInCapacity to make an adjustment relative to the current number of nodes, e.g. increasing by 50%

4. Create a scale-down policy
The policy dictates what will happen in response to a scaling event being triggered from a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

as-put-scaling-policy scale-down-policy
  -g sc-psod-woker-asg
  "--adjustment=-1"                      # in Windows, use double-quotes to surround a negative adjustment value
  --type ChangeInCapacity
  --region eu-west-1

5. Create a “high” CloudWatch alarm
We’re done with the command line now. In the Web Console, open up the CloudWatch view and create a new alarm. This alarm will monitor your metrics and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric – this example will fire the alarm if there are more than 10 messages in my queue for over a minute – then link the alarm to the scale-up policy in your group.

6. Create a “low” CloudWatch alarm
The opposite of step 5, this alarm will trigger when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and will invoke the scale-down policy.

And that’s it. You don’t need your original VM any more, as the auto-scaling group maintains the minimum number of nodes for you. You can test out the scaling by flexing your CloudWatch metric – in this example, filling up a queue from a stub publisher – and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
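For reference, here is the command-line half of the walkthrough (steps 1-4) gathered into a single script. This is only a sketch using the sample values from above: the AMI id, security group, availability zones and region are placeholders to swap for your own, and one group name is used throughout (the policy samples above happen to reference a different group name, sc-psod-woker-asg). The two CloudWatch alarms from steps 5 and 6 are still wired up in the Web Console as described.

#!/bin/bash
# Sketch: create the launch config, auto-scaling group and both scaling policies in one go.
# Sample values from the walkthrough above -- replace with your own.
AMI_ID=ami-7b9e9f12        # the AMI captured from your configured VM
REGION=eu-west-1
LAUNCHER=sc-xyz-launcher
ASG=sc-xyz-asg

# 1. Launch configuration: what a new instance looks like
as-create-launch-config $LAUNCHER \
    --image-id $AMI_ID \
    --region $REGION \
    --instance-type t1.micro \
    --group quicklaunch-1

# 2. Auto-scaling group: between 1 and 5 nodes, 5-minute cooldown between scaling events
as-create-auto-scaling-group $ASG \
    --region $REGION \
    --launch-configuration $LAUNCHER \
    --min-size 1 \
    --max-size 5 \
    --default-cooldown 300 \
    --availability-zones eu-west-1a eu-west-1b eu-west-1c

# 3. Scale-up policy: add one node when the "high" alarm fires
as-put-scaling-policy scale-up-policy \
    -g $ASG \
    --adjustment 1 \
    --type ChangeInCapacity \
    --region $REGION

# 4. Scale-down policy: remove one node when the "low" alarm fires
as-put-scaling-policy scale-down-policy \
    -g $ASG \
    "--adjustment=-1" \
    --type ChangeInCapacity \
    --region $REGION

# Each as-put-scaling-policy call prints the ARN of the new policy; those policies are
# what you select as the alarm action when you create the "high" and "low" alarms in CloudWatch.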

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple to use for end users. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models.

For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

With data mining, a structure is not selected in advance; the algorithms search for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

Books
Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there.
Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book.
Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools.
Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both business and technical perspectives.
Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation.
Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – in-depth explanation of the most popular data mining algorithms.
Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective.
Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general.

Forthcoming presentations
I am presenting two data mining-related sessions during the PASS Summit in Charlotte, NC:
Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.
Sinergija 2013, Belgrade, Serbia
Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel.
SQL Rally Amsterdam, Netherlands
Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don’t know, I guess I forgot the name I proposed every time right after I sent the proposal.

Courses
Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

OK, now you know: no more excuses. Start learning data mining and get the most out of your data.

    Read the article

  • OSB/OSR/OER in One Domain - QName violates loader constraints

    - by John Graves
    For demos, testing and prototyping, I wanted a single domain which contained three servers:
OSB - Oracle Service Bus
OSR - Oracle Service Registry
OER - Oracle Enterprise Repository
These three can work together to help with service governance in an enterprise. When building out the domain, I found errors in the OSR server due to some conflicting classes from the OSB. This wouldn't be an issue if each server was given a unique classpath setting with the node manager, but I was having the node manager use the standard startup scripts.

The domain's bin/setDomainEnv.sh script has a large set of extra libraries added for OSB which look like this:

if [ "${POST_CLASSPATH}" != "" ] ; then
  POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}"
  export POST_CLASSPATH
else
  POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar"
  export POST_CLASSPATH
fi
if [ "${PRE_CLASSPATH}" != "" ] ; then
  PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}"
  export PRE_CLASSPATH
else
  PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar"
  export PRE_CLASSPATH
fi
POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\
${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\
${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\
${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\
${DOMAIN_HOME}/config/osb"

I didn't take the time to sort out exactly which jar was causing the problem, but I simply surrounded this block with a conditional statement:

if [ "${SERVER_NAME}" == "osr_server1" ] ; then
  POST_CLASSPATH=""
else
  if [ "${POST_CLASSPATH}" != "" ] ; then
    POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar${CLASSPATHSEP}${POST_CLASSPATH}"
    export POST_CLASSPATH
  else
    POST_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jrf_11.1.1/jrf.jar"
    export POST_CLASSPATH
  fi
  if [ "${PRE_CLASSPATH}" != "" ] ; then
    PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar${CLASSPATHSEP}${PRE_CLASSPATH}"
    export PRE_CLASSPATH
  else
    PRE_CLASSPATH="${COMMON_COMPONENTS_HOME}/modules/oracle.jdbc_11.1.1/ojdbc6dms.jar"
    export PRE_CLASSPATH
  fi
  POST_CLASSPATH="${POST_CLASSPATH}${CLASSPATHSEP}/oracle/fmwhome/Oracle_OSB1/soa/modules/oracle.soa.common.adapters_11.1.1/oracle.soa.common.adapters.jar\
${CLASSPATHSEP}${ALSB_HOME}/lib/version.jar\
${CLASSPATHSEP}${ALSB_HOME}/lib/alsb.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-ant.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-common.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-core.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/lib/j2ssh-dameon.jar\
${CLASSPATHSEP}${ALSB_HOME}/3rdparty/classes${CLASSPATHSEP}\
${ALSB_HOME}/lib/external/log4j_1.2.8.jar${CLASSPATHSEP}\
${DOMAIN_HOME}/config/osb"
fi

I could have also just done an if [ ${SERVER_NAME} = "osb_server1" ], but I would have also had to include the AdminServer because they are needed there too. Since the oer_server1 didn't mind, I did the negative case as shown above.
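As a rough sketch of that positive-case alternative (assuming osb_server1 and AdminServer are the only servers in this domain that actually need the OSB additions), the wrapper could test for the servers that want the libraries instead:

# Sketch of the positive case; server names are the assumed ones for this domain.
if [ "${SERVER_NAME}" == "osb_server1" ] || [ "${SERVER_NAME}" == "AdminServer" ] ; then
  : # the original OSB classpath block shown above goes here, unchanged
else
  POST_CLASSPATH=""   # osr_server1, oer_server1 and any other servers skip the OSB jars
fi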
To help others find this post, I'm including the error that was reported in the OSR server before I made this change.

####<Mar 30, 2012 4:20:28 PM EST> <Error> <HTTP> <localhost.localdomain> <osr_server1> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <11d1def534ea1be0:30e96542:13662023753:-8000-000000000000001c> <1333084828916> <BEA-101017> <[ServletContext@470316600[app:registry module:registry.war path:/registry spec-version:null]] Root cause of ServletException.
java.lang.LinkageError: Class javax/xml/namespace/QName violates loader constraints
    at com.idoox.wsdl.extensions.PopulatedExtensionRegistry.<init>(PopulatedExtensionRegistry.java:84)
    at com.idoox.wsdl.factory.WSDLFactoryImpl.newDefinition(WSDLFactoryImpl.java:61)
    at com.idoox.wsdl.xml.WSDLReaderImpl.parseDefinitions(WSDLReaderImpl.java:419)
    at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:309)
    at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:272)
    at com.idoox.wsdl.xml.WSDLReaderImpl.readWSDL(WSDLReaderImpl.java:198)
    at com.idoox.wsdl.util.WSDLUtil.readWSDL(WSDLUtil.java:126)
    at com.systinet.wasp.admin.PackageRepositoryImpl.validateServicesNamespaceAndName(PackageRepositoryImpl.java:885)
    at com.systinet.wasp.admin.PackageRepositoryImpl.registerPackage(PackageRepositoryImpl.java:807)
    at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:611)
    at com.systinet.wasp.admin.PackageRepositoryImpl.updateDir(PackageRepositoryImpl.java:643)
    at com.systinet.wasp.admin.PackageRepositoryImpl.update(PackageRepositoryImpl.java:553)
    at com.systinet.wasp.admin.PackageRepositoryImpl.init(PackageRepositoryImpl.java:242)
    at com.idoox.wasp.ModuleRepository.loadModules(ModuleRepository.java:198)
    at com.systinet.wasp.WaspImpl.boot(WaspImpl.java:383)
    at org.systinet.wasp.Wasp.init(Wasp.java:151)
    at com.systinet.transport.servlet.server.Servlet.init(Unknown Source)
    at weblogic.servlet.internal.StubSecurityHelper$ServletInitAction.run(StubSecurityHelper.java:283)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.StubSecurityHelper.createServlet(StubSecurityHelper.java:64)
    at weblogic.servlet.internal.StubLifecycleHelper.createOneInstance(StubLifecycleHelper.java:58)
    at weblogic.servlet.internal.StubLifecycleHelper.<init>(StubLifecycleHelper.java:48)
    at weblogic.servlet.internal.ServletStubImpl.prepareServlet(ServletStubImpl.java:539)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:244)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:184)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3732)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3696)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2273)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2179)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1490)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
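If you do want to track down which jar is bundling its own copy of javax.xml.namespace.QName (rather than blanking the whole POST_CLASSPATH), a quick scan with standard zip tools does the job. This is only a sketch; the directory is the Oracle_OSB1 home referenced in the classpath above, so adjust the path to your installation:

# List every jar under the OSB home that carries its own javax/xml/namespace/QName class
for jar in $(find /oracle/fmwhome/Oracle_OSB1 -name '*.jar'); do
  if unzip -l "$jar" 2>/dev/null | grep -q 'javax/xml/namespace/QName.class'; then
    echo "$jar"
  fi
done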

    Read the article
