Search Results

Search found 52182 results on 2088 pages for 'something for the weekend'.


  • Looking for something to add some standard rules for my c++ project.

    - by rkb
    Hello all, my team is developing a C++ project on Linux, and we use Vim as our editor. I want to enforce some coding-standard rules for the team, so that if code does not comply, some sort of warning or error is raised when it is built or compiled. It doesn't necessarily have to happen at build time, but at the very least I want to be able to run a plugin or tool against the code to make sure it meets the standard, so that before committing to SVN everyone has to run the code through that plugin or script and can only commit once it passes. I'm not sure whether rules like this can be added to Vim itself; if there are any, let me know about it. For example, our code standard says that all member variables and private functions should start with an underscore:

        class A {
        private:
            int _count;
            float _amount;
            void _increment_count() { ++_count; }
        };

    So I want some warning, error, or other message to be produced for this class if the variables are declared as follows instead:

        class A {
        private:
            int count;
            float amount;
            void increment_count() { ++count; }
        };

    Please note that the warning or error is not from the compiler, because the program is still valid; it comes from the tool I want to use, so the code gets sent back for refactoring but still works fine as an executable. I am looking for some sort of plugin, pre-parser, or script that will help me achieve all this. (To answer the comment: we currently use SVN.)
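
    One lightweight way to get there is a pre-commit script that scans the private sections of headers for identifiers missing the underscore prefix. The sketch below is a naive, line-based check, not a real C++ parser (it will miss multi-line declarations and can be fooled by clever formatting); a dedicated tool such as clang-tidy's readability-identifier-naming check is more robust if it is available to you.

        #!/usr/bin/env python3
        """Naive pre-commit check: flag private members that do not start with '_'.

        This is a sketch, not a C++ parser: it only looks at single-line declarations
        that appear after a 'private:' label inside a class body.
        """
        import re
        import sys

        DECL = re.compile(r'^\s*(?:[\w:<>]+\s+)+([A-Za-z_]\w*)\s*[;(=]')

        def check(path):
            problems = []
            in_private = False
            with open(path, encoding='utf-8') as src:
                for lineno, line in enumerate(src, 1):
                    stripped = line.strip()
                    if stripped.startswith('private:'):
                        in_private = True
                    elif stripped.startswith(('public:', 'protected:')):
                        in_private = False
                    elif in_private:
                        m = DECL.match(line)
                        if m and not m.group(1).startswith('_'):
                            problems.append((lineno, m.group(1)))
            return problems

        if __name__ == '__main__':
            status = 0
            for path in sys.argv[1:]:
                for lineno, name in check(path):
                    print(f"{path}:{lineno}: private member '{name}' should start with '_'")
                    status = 1
            sys.exit(status)

    Wiring this into an SVN pre-commit hook (or just asking everyone to run it before committing) gives you the "refuse to commit until it passes" behaviour you describe.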

    Read the article

  • XSLT - Adding a class to something with a class?

    - by Probocop
    When using XSLT, how do I apply a class to an element which already has a class? The way I'm doing it now, the new class replaces the class that is already present. How would I add the class in addition to the existing class? My code is as follows:

        <xsl:if test="data[@alias = 'off'] = 1">
          <xsl:attribute name="class">off</xsl:attribute>
        </xsl:if>
        <xsl:if test="$currentPage/ancestor-or-self::node/@id = current()/@id">
          <xsl:attribute name="class">active</xsl:attribute>
        </xsl:if>

    Thanks.
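
    A common fix (sketched here against the markup above, so the tests are the ones from the question) is to emit the class attribute once and let each condition contribute to its value, since a second xsl:attribute with the same name simply overwrites the first:

        <xsl:attribute name="class">
          <xsl:if test="data[@alias = 'off'] = 1">off </xsl:if>
          <xsl:if test="$currentPage/ancestor-or-self::node/@id = current()/@id">active</xsl:if>
        </xsl:attribute>

    If a class also comes from a literal class="..." written directly on the result element, move that value inside the xsl:attribute as well, because a generated attribute replaces the literal one too.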

    Read the article

  • Create ordering in a MySQL table without using a number (because then it's hard to put something in

    - by user347256
    I have a long list of items (say, a few million items) in a MySQL table, let's call it mytable, and it has the field mytable.itemid. The items are given an order, and can be re-ordered by the user with drag and drop. If I add a field called mytable.order and just put consecutive numbers in it, that creates problems: what if I want to move an item between two other items? Then all the order fields after it have to be updated, which seems like a nightmare. Is there a (scalable) way to add ordering to a table that is different from just giving every item a number, ordering by that, and doing loads of SQL queries every time the order is changed?
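
    One common approach (a sketch only, using the table and column names from the question plus a hypothetical sort_key column) is to store a sparse sort key, for example a DECIMAL with large gaps, so that moving one item only touches that item's row; you only renumber a stretch of the list when two neighbours get too close together:

        ALTER TABLE mytable ADD COLUMN sort_key DECIMAL(20,10) NOT NULL DEFAULT 0;

        -- initial numbering with large gaps
        SET @n := 0;
        UPDATE mytable SET sort_key = (@n := @n + 1000) ORDER BY itemid;

        -- drop item 42 between the two items whose keys are 3000 and 4000
        UPDATE mytable SET sort_key = (3000 + 4000) / 2 WHERE itemid = 42;

        -- read the list back in order
        SELECT itemid FROM mytable ORDER BY sort_key;

    The alternative that avoids renumbering entirely is a linked list (a prev/next item id per row), which makes a move O(1) but makes reading the whole list in order much more awkward in SQL.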

    Read the article

  • How to get VS or Xcode warning with something like "x = x++"?

    - by Jim Buck
    In the spirit of the undefined behavior associated with sequence points (as in the question “x = ++x, is it really undefined?”), how does one get the compiler to complain about such code? Specifically, I am using Visual Studio 2010 and Xcode 4.3.1, the latter for an OS X app, and neither warned me about this. I even cranked the warnings on VS2010 up to "all", and it happily compiled this. (For the record, VS2010's version added 1 to the variable, while Xcode's version left the variable unchanged.)
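
    From the command line, both GCC and Clang can flag this (a sketch; exact flag sets and wording vary by compiler version, and I am not aware of an equivalent Visual C++ warning, so treat that side as an open question):

        // unsequenced.cpp
        // x is modified twice without an intervening sequence point, which is
        // undefined behavior before C++17 (C++17 finally sequences the x++ side
        // effect before the assignment).
        int main() {
            int x = 0;
            x = x++;    // g++ -Wall unsequenced.cpp      -> warns via -Wsequence-point
                        // clang++ -Wall unsequenced.cpp  -> warns via -Wunsequenced
            return x;
        }

    An Xcode build that uses a reasonably recent Clang should surface the -Wunsequenced diagnostic in the same way.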

    Read the article

  • Do something when phone reaches a set of location?

    - by Pentium10
    I have a list of coordinates in the database identified as POIs; for a city there could be 100 records. I would like to get notified when the phone gets within 150 meters of one of the locations. The location coordinates themselves have an error radius, usually 10 to 100 meters. Since registering a trigger for each location (there could be hundreds) doesn't seem like a good idea, how can I optimize the wake-up code? Also, do I have the option to remove a previously set-up notification from the queue?
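
    One way to keep the number of registered triggers small (a sketch with illustrative names; the real schema and whichever proximity API you use are up to you) is to pre-filter the POIs with a cheap bounding box around the phone's current position, register alerts or do the precise distance check only for those few candidates, and rebuild the set whenever the phone has moved a significant distance:

        // Compute a bounding box of radiusMeters around (lat, lon) so a simple
        // "lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?" query can narrow the POI
        // list before any exact distance math. Add the POI's own error radius to
        // radiusMeters when building the box.
        public final class GeoBox {
            public final double minLat, maxLat, minLon, maxLon;

            public GeoBox(double lat, double lon, double radiusMeters) {
                double latDelta = radiusMeters / 111320.0;                  // meters per degree of latitude
                double lonDelta = latDelta / Math.cos(Math.toRadians(lat)); // longitude degrees shrink away from the equator
                minLat = lat - latDelta;
                maxLat = lat + latDelta;
                minLon = lon - lonDelta;
                maxLon = lon + lonDelta;
            }
        }

    Cancelling a previously registered notification is then a matter of keeping the identifiers you registered (for example the PendingIntents) and removing the ones whose POIs fall outside the new box.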

    Read the article

  • C#: Using as bool? instead of object something = ViewState["hi"]

    - by Programmin Tool
    So I'm going through old code (2.0) and I came across this:

        object isReviewingValue = ViewState["IsReviewing"];
        if (isReviewingValue is bool)
        {
            return (bool)isReviewingValue;
        }

    My first thought was to use the "as" keyword to avoid the unneeded (bool)isReviewingValue cast. But "as" only works with non-value types (and nullable value types). No problem, I just went ahead and did this:

        bool? isReviewingValue = ViewState["IsReviewing"] as bool?;
        if (isReviewingValue.HasValue)
        {
            return isReviewingValue.Value;
        }

    Question is: besides looking a bit more readable, is this in fact better?
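
    A small side-by-side sketch (the ViewState key is the one from the question; the containing ReviewPage class is only illustrative and assumes the code lives in a Page or control where ViewState is available): both forms do a runtime type test plus an unbox, so "as bool?" mostly buys readability rather than different behavior or speed.

        using System.Web.UI;

        public class ReviewPage : Page
        {
            // Original style: explicit type test, then an unbox cast.
            private bool IsReviewingClassic
            {
                get
                {
                    object value = ViewState["IsReviewing"];
                    return value is bool && (bool)value;
                }
            }

            // "as" style: null when the slot is empty or holds something that isn't a bool.
            private bool IsReviewingNullable
            {
                get
                {
                    bool? value = ViewState["IsReviewing"] as bool?;
                    return value ?? false;   // or value.GetValueOrDefault()
                }
            }
        }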

    Read the article

  • Is it possible in Java to implement something similar to Object.clone() ?

    - by devoured elysium
    The Object.clone() method in Java is pretty special: instead of returning a copy of the object with the static type Object, it returns an object of the correct runtime type. This is better described with the following code:

        class A implements Cloneable {
            @Override
            public A clone() {
                try {
                    // although Object.clone() is declared in the Object class,
                    // this super.clone() call will return an A object!
                    return (A) super.clone();
                } catch (CloneNotSupportedException e) {
                    throw new AssertionError(e);
                }
            }
        }

        class B extends A {
        }

        public static void main(String[] args) {
            B b = new B();
            b.clone();  // here, although we haven't defined a clone() method in B,
                        // the cloned object returned is of type B!
        }

    So, could anyone explain whether there is any way to replicate what happens inside Object.clone()?
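
    You can get most of the way there with reflection (a sketch with real limitations: it needs an accessible no-arg constructor, skips static fields, and final fields may refuse to be copied, whereas the native Object.clone() has none of those restrictions). The two properties it reproduces are that the copy has the receiver's runtime class and that fields are copied one by one, shallowly:

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;

        public final class ShallowCopier {

            @SuppressWarnings("unchecked")
            public static <T> T shallowCopy(T original) throws Exception {
                Class<?> type = original.getClass();   // the runtime class, not the static type
                T copy = (T) type.getDeclaredConstructor().newInstance();
                for (Class<?> c = type; c != null && c != Object.class; c = c.getSuperclass()) {
                    for (Field f : c.getDeclaredFields()) {
                        if (Modifier.isStatic(f.getModifiers())) {
                            continue;
                        }
                        f.setAccessible(true);
                        f.set(copy, f.get(original));  // shallow copy, just like Object.clone()
                    }
                }
                return copy;
            }
        }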

    Read the article

  • An XEvent a Day (13 of 31) – The system_health Session

    - by Jonathan Kehayias
    Today’s post was originally planned for this coming weekend, but seems I’ve caught whatever bug my kids had over the weekend so I am changing up today’s blog post with one that is easier to cover and shorter.  If you’ve been running some of the queries from the posts in this series, you have no doubt come across an Event Session running on your server with the name of system_health.  In today’s post I’ll go over this session and provide links to references related to it. When Extended Events...(read more)

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04. It runs BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28 etc.) the backups are transferred to an external hard drive; we have three and rotate one drive for another every day. The problem is that we are randomly experiencing input/output errors. When this happens you cannot read from or write to the drive; it hasn't unmounted, so I can still cd to the mount point /media/backup1. The drives are not faulty, as it's happening on all of them, so I'm at a loss as to what the problem could be. Here is an example of the many errors we get:

        gzip: stdout: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error
        ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error
        ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error
        /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error
        /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error

    EDIT » Resolved
    So it turns out Quamis was right: even though I didn't think it was possible, it was actually a problem with the drive. We have three drives, all formatted as ext2, and on two of them we were getting I/O errors frequently. I came back to Quamis' answer and discovered the fsck command, so I ran it against the problem drives:

        fsck /dev/sdb1

    This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives etc. As the drives are in ext2 format they aren't journalled, and thus aren't protected against such issues. The drives are now working beautifully, thanks all! :D
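
    If the same thing comes back, the follow-up below is a reasonable sketch (the device name is only an example; confirm yours with blkid or fdisk -l, and make sure the filesystem is unmounted first). Adding a journal converts ext2 to ext3 in place, which makes the filesystem far more tolerant of power cuts and unclean removal:

        umount /media/backup1
        e2fsck -f /dev/sdb1     # force a full check even if the filesystem looks clean
        tune2fs -j /dev/sdb1    # add a journal, i.e. upgrade ext2 to ext3 in place

    Checking dmesg right after an error appears is also worth doing, since a genuine USB/SATA-level fault would show up there rather than in the application logs.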

    Read the article

  • Anyone know about AppDynamics? Does Microsoft already have something similar? Event Viewer? I'm confused

    - by punkouter
    We have a big collection of Java (80%) and .NET (20%) apps, and the people in charge want a way to monitor the applications somehow. They told me they are interested in AppDynamics. I have spent a bit of time looking into it but I still don't get it. Can someone tell me what it's about? From my understanding it seems like you would somehow inject the AppDynamics API into your code so it can report the different events that occur, so it sounds like Event Viewer? I am on the .NET side of this, and it seems to me we could just write data to a log and report off of that data. I don't get what the additional complexity of using AppDynamics buys us. Maybe it is overkill?

    Read the article

  • WSS and CAG, _layouts pages break

    - by Mike
    Alright, I've searched everywhere and I cannot find the answer, due to the rarity of our setup: WSS 3.0 / IIS 6.0 / Windows Server 2003.

    We have a SharePoint site that is in good shape, almost. Its TCP and SSL ports are uncommon, so requests need to be rerouted to work properly. This is where the Citrix Access Gateway (CAG) comes into play: it redirects any request for the URL (something.something.com) to the correct SSL port on the correct server. My AAM is configured with Default something.something.com and nothing else, since the CAG provides the port. We use FBA and require SSL.

    This works perfectly for everything that an end user can see, but if I try to add a web part, it errors out, whereas if I add it internally, or bypass the CAG, the web part adds fine. The same goes for most of the _layouts pages, like _layouts/new.aspx. If I add a link list / document library on something.something.com, the page errors out (Page cannot be displayed), but if I try it with an internal address it works fine. I found that when I am doing anything administrative, the site navigates to the pages I need just fine, but when I actually ADD something the URL changes from something.something.com to something.something.com:SSLport, and the site errors out. The URL with the SSL port also shows as the Site URL when navigating to Site Settings. However, if I bypass the CAG using the internal address, the _layouts pages work like a charm and I can add anything.

    All the CAG does is reroute a DNS request to the provided server and port. I've tried re-extending the application, no luck. I've tried changing the AAM to hide the port, and the CAG rejects it. I've tried recreating a new web app / site collection with the same rules on the CAG, and the same thing occurs. Correct me if I'm wrong, and please provide me with some feedback and answers; any suggestions would be very appreciated. Is it the CAG or the Alternate Access Mappings (AAM)?

    Read the article

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and a weekly full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore is pulling in files from the weekend full backup even though they no longer existed at the point in time being restored. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt

    The folder looked like this at the weekend when the full backup ran (Test3.txt existed then, but had been deleted by Tuesday 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    The actual folder restored looks like this:

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    Test3.txt should not be restored, because it did not exist at the point in time of the restore. Is there a setting somewhere that we are missing? The folder in question is 200 GB; the example above is a simplification. I realise this is a basic question.

    Read the article

  • MySQL can only log in as root, even after creating new users with their own database

    - by ionFish
    Problem: I just set up a Debian Wheezy installation for testing, and installed the LAMP packages and phpMyAdmin. I can log in as root with my pre-defined password and create/edit/delete both databases and users. The problem comes when I create a new user 'something', set a password for it, and grant it all privileges on a database 'something' (same name as the user). Upon connecting as that user, access is denied.

    Details: Host: localhost, running MySQL 5.5.24-8.

    Creating the user:

        CREATE USER 'something'@'%' IDENTIFIED BY '***';
        GRANT USAGE ON *.* TO 'something'@'%' IDENTIFIED BY '***'
            WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0
                 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
        CREATE DATABASE IF NOT EXISTS `something`;
        GRANT ALL PRIVILEGES ON `something`.* TO 'something'@'%';

    Checking privileges:

        GRANT USAGE ON *.* TO 'something'@'%' IDENTIFIED BY PASSWORD '*92F9DAF5F5129554509489FDB6A433510223C799';

    Result:

        Access denied for user 'something'@'localhost' (using password: YES)

    More info: I use this same exact procedure on Squeeze, and it works perfectly. Is there a chance it's because of Wheezy, or something else? I need to continue using Wheezy because of the updated packages (for this test server; the others work fine), so 'just use Squeeze' is not an option. Note: I HAVE tried FLUSH PRIVILEGES; to no avail.
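
    One frequent culprit worth ruling out (a sketch; check what mysql.user actually contains on the box before dropping anything): when connecting locally, an anonymous ''@'localhost' account, which some default installs ship with, sorts ahead of 'something'@'%', so the login matches the anonymous user and is denied. Either give the user an explicit localhost entry or remove the anonymous accounts:

        -- see exactly which accounts exist and could match the connection
        SELECT user, host FROM mysql.user ORDER BY host, user;

        -- option 1: create a localhost-specific account
        CREATE USER 'something'@'localhost' IDENTIFIED BY '***';
        GRANT ALL PRIVILEGES ON `something`.* TO 'something'@'localhost';

        -- option 2: drop the anonymous accounts entirely
        DROP USER ''@'localhost';
        FLUSH PRIVILEGES;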

    Read the article

  • nginx: how do I track down a random 500 from nginx (not my application). Potentially has something to do with load?

    - by kaleidomedallion
    We recently had some 500s from nginx itself that somehow were not logged (we have screenshots, but nothing in the logs). That is weird in itself, because errors usually show up there. Regardless, I am wondering whether there is something like a connection pool or connection limit that, if maxed out, would result in a 500? We have tentatively correlated it with a recent spike in traffic, but it is not conclusive. Does anyone have any ideas on how to begin to approach such an issue?
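
    One thing worth ruling out (a sketch, with example paths and values rather than anything taken from this server): each nginx worker has a worker_connections cap, and when it is exhausted nginx complains in the error_log (a "worker_connections are not enough" message) and fails the request, which would line up with a traffic spike. Raising the limits and turning the error log up while reproducing the load should confirm or eliminate it:

        # nginx.conf fragment
        worker_rlimit_nofile  8192;          # per-worker file descriptor ceiling

        events {
            worker_connections  4096;        # a proxied request consumes two connections
        }

        http {
            error_log  /var/log/nginx/error.log  info;   # capture the complaint if it happens
        }

    If the error log is genuinely empty at the time of the screenshots, also double-check that the 500 really came from this nginx and not from an upstream or another layer writing its own logs.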

    Read the article

  • Delete temp file during finally vs delete output file during catch

    - by Russell
    This is in Java 6. I've seen more than once that people create a temp file, do something with it, then rename it to the output file. Everything is wrapped in a try-finally block, where the temp file is deleted in finally in case something goes wrong in between:

        try {
            // do something with tempFile
            // do something with tempFile
            // do something with tempFile
            tempFile.renameTo(outputFile);
        } finally {
            if (tempFile.exists()) tempFile.delete();
        }

    I was wondering what the benefits of doing that are, instead of working on the output file directly and deleting it in case of exceptions:

        try {
            // do something with outputFile
            // do something with outputFile
            // do something with outputFile
        } catch (Exception e) {
            if (outputFile.exists()) outputFile.delete();
        }

    My guess is that deleting the temp file in finally benefits me when the try block can throw many kinds of exceptions. Is my guess right? What else?
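
    The bigger benefit is atomicity from the reader's point of view: anything that opens the output file sees either the complete old version or the complete new one, never a half-written file, and a failed run leaves the previous output intact instead of deleted. A sketch of the idiom with the usual extra touches (the SafeWriter class, writeReport() and the file names are illustrative, not from the question):

        import java.io.File;
        import java.io.IOException;

        public final class SafeWriter {

            static void writeAtomically(File outputFile) throws IOException {
                // create the temp file next to the output so the rename stays on one filesystem
                File tempFile = File.createTempFile("report", ".tmp", outputFile.getParentFile());
                try {
                    writeReport(tempFile);                    // may throw at any point
                    if (!tempFile.renameTo(outputFile)) {     // renameTo reports failure via its return value
                        throw new IOException("could not move " + tempFile + " to " + outputFile);
                    }
                } finally {
                    tempFile.delete();                        // no-op if the rename already succeeded
                }
            }

            private static void writeReport(File target) throws IOException {
                // placeholder for the real "do something with tempFile" work
            }
        }

    Checking the renameTo return value matters, because it fails silently (and typically cannot move a file across filesystems), which is exactly the kind of problem the temp-file idiom is meant to avoid.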

    Read the article

  • Reading XML outer tables

    - by Sathish
    I have an XML file as shown below. If I convert this to a DataSet, the DataSet will contain three tables with the table names Table Name1, Table Name2 and Table Name3, but I want to get this information without converting it to a DataSet. Basically I want to get all the outer table names out of my Excel file. Please help me with a piece of code.

        <Main Table>
            <Table Name1>
                <Something>
                <Something>
                <Something>
                <Something>
            </Table Name1>
            <Table Name2>
                <Something>
                <Something>
                <Something>
                <Something>
            </Table Name2>
            <Table Name3>
                <Something>
                <Something>
                <Something>
            </Table Name3>
        </Main Table>
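
    Assuming the real file is well-formed XML (the element names above contain spaces, which a parser would reject), the names of the elements directly under the root can be read without a DataSet. A sketch using LINQ to XML, so it needs .NET 3.5 or later; on .NET 2.0 the same walk can be done with XmlReader or XmlDocument. The file name is an example:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class OuterTableNames
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("tables.xml");

                // one name per distinct child element of the root
                foreach (string name in doc.Root.Elements()
                                                .Select(e => e.Name.LocalName)
                                                .Distinct())
                {
                    Console.WriteLine(name);
                }
            }
        }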

    Read the article

  • Is there a way to pass another parameter in the preg_replace_callback callback function?

    - by DaNieL
    Mmmh guys, I really hope my English is good enough to explain what I need. Let's take this example (it is just an example!) of code:

        class Something {
            public function Lower($string) {
                return strtolower($string);
            }
        }

        class Foo {
            public $something;
            public $reg;
            public $string;

            public function __construct($reg, $string, $something) {
                $this->something = $something;
                $this->reg = $reg;
                $this->string = $string;
            }

            public function Replace() {
                return preg_replace_callback($this->reg, 'Foo::Bar', $this->string);
            }

            public static function Bar($matches) {
                /*
                 * [...]
                 * do something with $matches and create the $output variable
                 * [...]
                 */

                /*
                 * I know this is really useless in this example, but I need to have an
                 * instance of an object here (in this example, the Something object,
                 * but it could be something else!)
                 */
                return $this->something->Lower($output);
            }
        }

        $s = new Something();
        $foo = new Foo($myregexp, $mystring, $s);
        $content = $foo->Replace();

    So, the PHP manual says that to use a class method as the callback in preg_replace_callback(), the method must be static. I need to pass an instance of a previously initialized object (in the example, an instance of the Something class) to the callback function. I tried to use call_user_func(), but that doesn't work, because that way I lose the $matches parameter. Is there a way to do that, or do I have to split the process (doing a preg_match_all first, retrieving the replacement value for each match, and then a simple preg_replace)?
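
    preg_replace_callback() accepts any PHP callable, not just a function-name string, so the usual fix (a sketch of only the parts that change; the rest of the class stays as in the question) is to pass array($this, 'Bar') and make Bar() a normal instance method, which keeps $this->something available with no extra plumbing:

        public function Replace() {
            // instance-method callable instead of the 'Foo::Bar' string
            return preg_replace_callback($this->reg, array($this, 'Bar'), $this->string);
        }

        public function Bar($matches) {   // no longer static
            $output = $matches[0];        // build $output from $matches as before
            return $this->something->Lower($output);
        }

    On PHP 5.3+ an anonymous function that captures the needed object via use (...) works just as well.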

    Read the article

  • [C++] Multiple inheritance from template class

    - by Tom P.
    Hello, I'm having issues with multiple inheritance from different instantiations of the same template class. Specifically, I'm trying to do this:

        template <class T>
        class Base {
        public:
            Base() : obj(NULL) { }

            virtual ~Base() {
                if (obj != NULL)
                    delete obj;
            }

            template <class T>
            T* createBase() {
                obj = new T();
                return obj;
            }

        protected:
            T* obj;
        };

        class Something {
            // ...
        };

        class SomethingElse {
            // ...
        };

        class Derived : public Base<Something>, public Base<SomethingElse> {
        };

        int main() {
            Derived* d = new Derived();
            Something* smth1 = d->createBase<Something>();
            SomethingElse* smth2 = d->createBase<SomethingElse>();
            delete d;
            return 0;
        }

    When I try to compile the above code, I get the following errors:

        1>[...](41) : error C2440: '=' : cannot convert from 'SomethingElse *' to 'Something *'
        1>        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
        1>        [...](71) : see reference to function template instantiation 'T *Base<Something>::createBase<SomethingElse>(void)' being compiled
        1>        with
        1>        [
        1>            T=SomethingElse
        1>        ]
        1>[...](43) : error C2440: 'return' : cannot convert from 'Something *' to 'SomethingElse *'
        1>        Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast

    The issue seems to be ambiguity due to the member obj being inherited from both Base<Something> and Base<SomethingElse>, and I can work around it by disambiguating my calls to createBase:

        Something* smth1 = d->Base<Something>::createBase<Something>();
        SomethingElse* smth2 = d->Base<SomethingElse>::createBase<SomethingElse>();

    However, this solution is dreadfully impractical, syntactically speaking, and I'd prefer something more elegant. Moreover, I'm puzzled by the first error message: it seems to imply that there is an instantiation createBase<SomethingElse> in Base<Something>, but how is that even possible? Any information or advice regarding this issue would be much appreciated.
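
    One tidier arrangement (a sketch using the class names above, not necessarily the only fix) is to drop the inner member template, which shadows the class's own T and is what allows Base<Something> to instantiate a createBase<SomethingElse> in the first place, and to give Derived a small forwarding template that names the right base exactly once:

        #include <cstddef>

        template <class T>
        class Base {
        public:
            Base() : obj(NULL) {}
            virtual ~Base() { delete obj; }      // delete of a null pointer is a no-op

            T* createBase() {                    // no shadowing 'template <class T>' here
                obj = new T();
                return obj;
            }

        protected:
            T* obj;
        };

        class Something {};
        class SomethingElse {};

        class Derived : public Base<Something>, public Base<SomethingElse> {
        public:
            template <class T>
            T* create() { return this->Base<T>::createBase(); }   // the disambiguation lives here, once
        };

        int main() {
            Derived d;
            Something* smth1 = d.create<Something>();
            SomethingElse* smth2 = d.create<SomethingElse>();
            (void)smth1;
            (void)smth2;
            return 0;
        }

    The call sites then read d.create<Something>() with no base-class qualification at all.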

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don’t usually have this opportunity. The event is called Coders 4 Charities (C4C) and it’s a grueling weekend of coding for nearly 30 hours over the weekend. When you finish you get to present to the charity and all of the other groups what you have completed. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs; software developers volunteering their time to help charities better serve their community though the latest technology! The actual event was lined with multiple charities and about 50 developers, designers, business analysts, etc, each working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn’t have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don’t think we made it down there to play, but the food and drinks were a huge help for us. The charity we we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screen shot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity and I’m happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time to a charity?

    Read the article

  • Got that Friday feeling?

    - by Rebecca Amos
    Saturday is just around the corner, and we’re all starting to wrap up for the weekend. If you’re the DBA that ‘Friday feeling’ might be as much about checking and preparing your SQL Servers for the next two days, as about looking forward to spending time with friends and family. Whether you’re double-checking your disaster recovery strategy, or know that it’s your turn to be on-call this weekend, it’s likely you’re preparing for the worst, just in case. The fact that you’re making these checks, and caring about both your servers and your users, means that you might be an exceptional DBA. You’re already putting in that extra effort to make other people’s lives easier. So why not take some time for your professional development and enter the Exceptional DBA Awards? If you’re looking for some inspiration for your entry, download our Judges’ Top Tips poster for advice on what the judges are looking for from this year’s entrants. Not only will you be boosting your professional development, but you could win full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and a copy of Red Gate’s SQL DBA Bundle. So take some time out for yourself this weekend and get started on your entry: www.exceptionaldba.com

    Read the article

  • Installing ImageMagick on Mac OSX 10.6

    - by Russell C.
    I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP but also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and after reading around online a lot of people reported having issues going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using to try to install ImageMagick:

        sudo port install p5-perlmagick

    And here are all the errors reported during the install:

        ---> Computing dependencies for p5-perlmagick
        ---> Building p5-perlmagick
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
        Command output:
        Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10918: error: request for member 'severity' in something not a structure or union
        Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
        Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
        Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
        Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
        Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'reason' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: error: request for member 'severity' in something not a structure or union
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
        Magick.xs:10929: error: request for member 'description' in something not a structure or union
        Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
        Magick.xs:10929: warning: unused variable 'message'
        Magick.xs:10856: warning: unused variable 'filename'
        Magick.c:10784: warning: unused variable 'ref'
        Magick.c:10777: warning: unused variable 'ix'
        Magick.xs: In function 'boot_Image__Magick':
        Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
        Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
        Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
        make: *** [Magick.o] Error 1
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice from anyone out there who has done this successfully. Thanks in advance!

    Read the article

  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was:

        proc genmod data=data0 namelen=30;
          model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8
                + RACE1 + RACE3 + WEEKEND + SEQ / dist=normal;
          FREQ REPLICATE_VAR;
        run;

    My R code is:

        parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8
                       + RACE1 + RACE3 + WEEKEND + SEQ,
                       data = data0, family = gaussian, weights = REPLICATE_VAR)

    When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is:

        Name       df  Estimate    StdErr     LowerWaldCL  UpperWaldCL  ChiSq      ProbChiSq
        Intercept  1   6.5007436   .00078884  6.4991975    6.5022897    67911982   0
        agegrp4    1   .64607262   .00105425  .64400633    .64813891    375556.79  0
        agegrp5    1   .4191395    .00089722  .41738099    .42089802    218233.76  0
        agegrp6    1   -.22518765  .00083118  -.22681672   -.22355857   73401.113  0
        agegrp7    1   -1.7445189  .00087569  -1.7462352   -1.7428026   3968762.2  0
        agegrp8    1   -2.2908855  .00109766  -2.2930369   -2.2887342   4355849.4  0
        race1      1   -.13454883  .00080672  -.13612997   -.13296769   27817.29   0
        race3      1   -.20607036  .00070966  -.20746127   -.20467944   84319.131  0
        weekend    1   .0327884    .00044731  .0319117     .03366511    5373.1931  0
        seq2       1   -.47509583  .00047337  -.47602363   -.47416804   1007291.3  0
        Scale      1   2.9328613   .00015586  2.9325559    2.9331668    -127

    The summary output from R is:

        Coefficients:
                     Estimate  Std. Error  t value  Pr(>|t|)
        (Intercept)   6.50074     0.10354   62.785   < 2e-16
        AGEGRP4       0.64607     0.13838    4.669  3.07e-06
        AGEGRP5       0.41914     0.11776    3.559  0.000374
        AGEGRP6      -0.22519     0.10910   -2.064  0.039031
        AGEGRP7      -1.74452     0.11494  -15.178   < 2e-16
        AGEGRP8      -2.29089     0.14407  -15.901   < 2e-16
        RACE1        -0.13455     0.10589   -1.271  0.203865
        RACE3        -0.20607     0.09315   -2.212  0.026967
        WEEKEND       0.03279     0.05871    0.558  0.576535
        SEQ          -0.47510     0.06213   -7.646  2.25e-14

    The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I have read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficients and standard error estimates that were produced in SAS?

    To update, thanks to Spacedman's answer: I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight that is an integer (and quite large, on the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.
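
    The usual explanation for this particular mismatch (a sketch of the check, not a definitive diagnosis): SAS's FREQ statement treats REPLICATE_VAR as a case count, as if each row appeared that many times, so the effective sample size is the sum of the frequencies, whereas glm(weights = ...) fits prior (precision) weights, which leave the sample size at nrow(data0). The point estimates agree, but the standard errors are scaled very differently. Expanding the rows reproduces the FREQ behavior directly:

        ## each row repeated REPLICATE_VAR times, mimicking SAS's FREQ statement
        data_expanded <- data0[rep(seq_len(nrow(data0)), times = data0$REPLICATE_VAR), ]

        parmsg2_freq <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8
                            + RACE1 + RACE3 + WEEKEND + SEQ,
                            data = data_expanded, family = gaussian)
        summary(parmsg2_freq)

    With replication weights in the thousands this expansion can get very large; the survey package (svrepdesign() plus svyglm()) is the more scalable, and statistically more defensible, way to handle balanced repeated replication weights.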

    Read the article

  • Spending the summer at camp… Web Camp, that is

    - by Jon Galloway
    Microsoft is sponsoring a series of Web Camps this summer. They're a series of free two-day events being held worldwide, and I'm really excited about taking part. The camp is targeted at a broad range of developer backgrounds and experience. Content builds from 101-level introductory material to 200-300 level coverage, but we hit some advanced bits (e.g. MVC 2 features, jQuery templating, IIS 7 features, etc.) that advanced developers may not yet have seen. We start with a lap around ASP.NET & Web Forms, then move on to building an application with ASP.NET MVC 2, jQuery, and Entity Framework 4, and finally deploy to IIS. I got to spend some time working with James before the first Web Camp refining the content, and I think he's packed about as much goodness into the time available as is scientifically possible. The content is really code focused: we start with File/New Project and spend the day building a real, working application. The second day of the Web Camp provides attendees an opportunity to get hands on. There are two options:

    Join a team and build an application of your choice
    Work on a lab or tutorial

    James Senior and I kicked off the fun with the first Web Camp in Toronto a few weeks ago. It was sold out, lots of fun, and by all accounts a great way to spend two days. I'm really enthusiastic about the format. Rather than just listening to speakers and then forgetting everything in a few days, attendees actually build something of their choice. They get an opportunity to pitch projects they're interested in, form teams, and build it, getting experience with "real world" problems, with all the help they need from experienced developers. James got help on the second day's practical part from the good folks that run Startup Weekend. Startup Weekend is a fantastic program that gathers developers together to build cool apps in a weekend, so their input on how to organize successful teams for weekend projects was invaluable. Nick Seguin joined us in Toronto, and in addition to making sure that everything flowed smoothly, he just added a lot of fun and excitement to the event, reminding us all about how much fun it is to come up with a cool idea and just build it. In addition to the Toronto camp, I'll be at the Mountain View, London, Munich, and New York camps over the next month. London is sold out, but the rest still have space available, so come join us! Here's the full list, with the ones I'll be at bolded because - you know - it's my blog. The whole speaker list is great, including Scott Guthrie, Scott Hanselman, James Senior, Rachel Appel, Dan Wahlin, and Christian Wenz.

    Toronto May 7-8 (James Senior and I were thrown out on our collective ears)
    Moscow May 19
    Beijing May 21-22
    Shanghai May 24-25
    Mountain View May 27-28 (I'm speaking with Rachel Appel)
    Sydney May 28-29
    Singapore June 04-05
    London June 04-05 (I'm speaking with Christian Wenz - SOLD OUT)
    Munich June 07-08 (I'm speaking with Christian Wenz)
    Chicago June 11-12
    Redmond, WA June 18-19
    New York June 25-26 (I'm speaking with Dan Wahlin)

    Come say hi!

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekend, like traveling, reading, or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy the learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them.

    I should highlight that today's applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory for caching to indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure, which can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service depending on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:

    Database provisioning & orchestration
    Slow speed due to hardware issues
    Poor monitoring tools
    High network latency

    Now, if you have great software and expert network engineers, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application side as well. Here are the top three things which can be problems if you do not have an application expert:

    Replication and clustering
    Simple provisioning of hard drive space
    Automatic sharding

    Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.

    Here are the few steps needed to get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new, free account.

    Step 1: Signup
    It is very simple to sign up for Morpheus.

    Step 2: Select your database
    I use MySQL for my daily routine, so I have selected MySQL. Upon clicking the big red button to add an instance, a dialogue for creating a new instance appears.

    Step 3: Create User
    Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.

    Step 4: Configure your MySQL client
    I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the new user.

    That's it! You are connected to the MySQL instance. Now you can create your objects just like you would create them on your local box.
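
    A quick sketch of that last step (the connection details and the table below are placeholders, not part of the walkthrough): once the instance and user exist, any standard MySQL client can connect and work as usual.

        -- connect first, e.g.:  mysql -h <instance-ip> -u pinal -p
        CREATE TABLE weekend_projects (
            id    INT AUTO_INCREMENT PRIMARY KEY,
            name  VARCHAR(100) NOT NULL,
            notes TEXT
        );

        INSERT INTO weekend_projects (name, notes)
        VALUES ('Morpheus trial', 'Provisioned MySQL as a service and connected with Workbench');

        SELECT * FROM weekend_projects;
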
    You will have all the features of Morpheus available while you are working with your database.

    Dashboard

    While working with Morpheus, I was most impressed with its dashboard. In future blog posts I will write more about this feature. With Morpheus you also use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article
