Search Results

  • Code Camp 2013 Harrisburg PA

    - by raysmithequip
    Originally posted on: http://geekswithblogs.net/raysmithequip/archive/2013/10/15/154349.aspx The Central Pennsylvania Dot Net Users Group will be hosting a code camp on November 2, 2013. The schedule is already on our group's webpage, http://centralpenn.web121.discountasp.net/home/CodeCamp2013/tabid/109/Default.aspx (you will find it on the pull-down tab). Registration is free, but you will have to use Meetup to register: http://www.meetup.com/Central-Penn-Dot-Net-User-Group/events/141788672/ Sign in to Meetup and register to attend Code Camp!! Learning will be plentiful, the giveaways will be COOL!! So you gotta be there!!! In a couple of days I will post the schedule here in an effort to spread the word. ray smith n3twu

    Read the article

  • Efficient Bus Loading

    - by System Down
    This is something I did for a bus travel company a long time ago, and I was never happy with the results. I was thinking about that old project recently and thought I'd revisit the problem. Problem: the bus travel company has several buses with different passenger capacities (e.g. fifteen 50-passenger buses, twenty-five 30-passenger buses, etc.). They specialize in offering transportation to very large groups (300+ passengers per group). Since each group needs to travel together, they need to manage their fleet efficiently to reduce waste. For instance, 88 passengers are better served by three 30-passenger buses (2 empty seats) than by two 50-passenger buses (12 empty seats). As another example, 75 passengers would be better served by one 50-passenger bus and one 30-passenger bus, a mix of types. What's a good algorithm to do this?
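
    A brute-force sketch (not from the original post) of one possible approach, written in Python and assuming the goal is simply to cover each group with the fewest empty seats; all names are illustrative. With many bus types a dynamic-programming formulation would scale better, and ties could additionally prefer fewer buses.

        from itertools import product
        from math import ceil

        def best_assignment(group_size, fleet):
            """fleet: list of (capacity, available_count) pairs.
            Return (buses_per_type, empty_seats) for the combination whose
            total capacity covers the whole group with the fewest empty seats."""
            # Never need more buses of one type than it takes to carry the group alone.
            ranges = [range(min(count, ceil(group_size / cap)) + 1) for cap, count in fleet]
            best = None
            for counts in product(*ranges):
                capacity = sum(n * cap for n, (cap, _) in zip(counts, fleet))
                if capacity < group_size:
                    continue  # this combination cannot seat everyone together
                empty = capacity - group_size
                if best is None or empty < best[1]:
                    best = (counts, empty)
            return best

        # 88 passengers, fleet of fifteen 50-seat and twenty-five 30-seat buses
        print(best_assignment(88, [(50, 15), (30, 25)]))  # ((0, 3), 2): three 30-seat buses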

    Read the article

  • HTTP resource bundling/streaming practice

    - by icelava
    Our SPA (plain HTML and JavaScript) makes use of a huge volume of JavaScript and other resources that are downloaded via XHR. Given the sheer number of components and browser limits on simultaneous requests, we're looking for ways to deliver our resources more efficiently. One method we're considering is bundling several resources that logically form a coherent group into a single file, thus reducing the cost to only one XHR per group. Furthermore, to make it more responsive, we'd like to constantly inspect the partial responseText during the LOADING state, determine whether a usable chunk (an atomic resource) has already been downloaded, and make it available for deserialization/processing even before the XHR is DONE (a stream-like experience). We're thinking surely somebody else has considered roughly the same approach before, but we haven't really come across any library/framework or container file format that suits our scenario. Does anybody else know of something similar?
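
    The post has not settled on a container format, so the following is only an illustration (in Python for brevity, with an invented framing) of how a length-prefixed bundle lets a consumer pull complete resources out of a partially downloaded response; in the browser the same loop would run against the growing responseText on each progress event.

        def extract_chunks(buffer):
            """buffer: bytes of the bundle received so far; each entry is framed as
            a decimal length, a newline, then that many payload bytes.
            Return the complete payloads plus the unconsumed tail."""
            chunks, pos = [], 0
            while True:
                newline = buffer.find(b"\n", pos)
                if newline == -1:
                    break                                  # length header not complete yet
                length = int(buffer[pos:newline])
                start, end = newline + 1, newline + 1 + length
                if len(buffer) < end:
                    break                                  # payload still downloading
                chunks.append(buffer[start:end])
                pos = end
            return chunks, buffer[pos:]

        partial = b"5\nhello3\nw"                          # first progress event
        done, rest = extract_chunks(partial)
        print(done, rest)                                  # [b'hello'] b'3\nw'
        print(extract_chunks(rest + b"or"))                # ([b'wor'], b'') once the rest arrives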

    Read the article

  • Multi Level Security via Roles

    - by Geertjan
    I'm simulating a small scenario: Users can be dragged into roles; roles can be dragged into role groups. When a drop is made into a role group, a new role is created (WindowManager.getDefault().setRole("")). Then, when the user logs in, they log into a particular role. Depending on the role they log into, a different role group is assigned, which maps to a certain "role" in NetBeans Platform terms, i.e., the related level of security is applied and the related windows open.

    Read the article

  • Global Adopt a JSR Program Update

    - by heathervc
    The Global Adopt a JSR program, combining the efforts of SouJava and the London Java Community, is an excellent place to get some Java User Group (JUG) resources for JSRs. It also has the potential to act as an extra set of eyes, ears and volunteers for JSRs. The Global project to go to is at http://adoptajsr.java.net. The wiki page explaining the whole program and its benefits to Spec Leads and EGs can be found there, including: The mailing list: [email protected]. Portuguese speakers (mainly Brazilian JUG members) have their own mailing list, and more language lists may be added as required. The IRC channel is adoptajsr on irc.freenode.net. Also check out this InfoQ article with Martijn Verburg about the London Java user group, the Adopt a JSR program, the JCP and Oracle’s handling of the Java community.

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and provides the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local backup option is selected, 60% of the available shared storage will be allocated for the Fast Recovery Area that contains database backups and archive logs. The External backup option will allocate 20% of the available shared storage to the Fast Recovery Area. So, let's look at an example of High Redundancy and External Backups:
    Disk Group Redundancy – High --> Triple mirroring to provide ~4TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage
    What about Normal Redundancy with External Backups?
    Disk Group Redundancy – Normal --> Double mirroring to provide ~6TB of available storage
    Database Backup Location – External --> 20% of available shared storage allocated to +RECO
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage
    As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
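
    A small calculation (not from the post, Python used only as a calculator) that reproduces the figures quoted above: the redundancy level fixes the available storage and the backup location fixes the +DATA/+RECO split.

        RAW_TB = 12  # raw shared storage on the appliance, per the post

        def usable(redundancy, backup):
            """Return (+DATA, +RECO) usable storage in TB."""
            mirror = {"high": 3, "normal": 2}[redundancy]           # triple vs double mirroring
            reco_share = {"local": 0.60, "external": 0.20}[backup]  # share given to the Fast Recovery Area
            available = RAW_TB / mirror
            return round(available * (1 - reco_share), 1), round(available * reco_share, 1)

        print(usable("high", "external"))    # (3.2, 0.8)
        print(usable("normal", "external"))  # (4.8, 1.2)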

    Read the article

  • can't chmod on external hard disk?

    - by G. He
    I have a USB 3.0 external hard disk, partitioned into 3 NTFS partitions. When I plug the hard disk in, the 3 partitions are automatically mounted under /media. So far so good. I can read and write files, mkdir, etc. on these partitions. But I can't do chmod/chown on any of the files/directories on these partitions. The owner:group is always myself, and the mode is always 700 for directories and 600 for files. I have another partition on an internal hard disk also mounted. That partition works fine. I looked at the output of the mount command, and the only difference between the mount options is that there is one extra 'default_permissions' on the external hard disk. Is there any way I can set the owner:group and mode on these files and directories?

    Read the article

  • Lock file/content while being edited in browser. [migrated]

    - by codescope
    In one of my projects users are allowed to edit the same file. It is group work and the maximum number of users in a group is 4. It is rare that they will be editing at the same time, but there is a possibility of it. I am using CKEditor, which displays the content. How can I lock the content while it is being edited? For the above case, what will happen if one user opens the content for editing and then never saves and leaves the window open? Is it possible to save the content and release the lock so the content can be edited by another user? If the first user comes back to their desk they should see the message "content has been changed, please refresh". I am using PHP and MySQL. Thanks
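
    One common answer, sketched below in Python rather than PHP and with made-up names and a made-up TTL: a lock that goes stale unless the editor's browser keeps refreshing it, plus a version check on save so a stale editor gets the "content has been changed, please refresh" message. In the real application the dictionary would be a MySQL table and acquire() would be hit by a periodic AJAX heartbeat from the editing page.

        import time

        LOCK_TTL = 120      # seconds without a heartbeat before a lock is considered abandoned
        locks = {}          # doc_id -> (user, last_heartbeat); stands in for a MySQL table

        def acquire(doc_id, user, now=None):
            """Grant (or refresh) the edit lock if it is free, already ours, or stale."""
            now = now if now is not None else time.time()
            holder = locks.get(doc_id)
            if holder is None or holder[0] == user or now - holder[1] > LOCK_TTL:
                locks[doc_id] = (user, now)
                return True
            return False    # someone else is actively editing

        def save(doc_id, user, base_version, current_version):
            """Reject the save if the stored version moved since this user loaded it."""
            if base_version != current_version:
                return "content has been changed, please refresh"
            locks.pop(doc_id, None)          # release the lock on a successful save
            return "saved"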

    Read the article

  • Oracle Database 12c New Feature: ASM Scrubbing Disk Groups

    - by Liu Maclean
    In 12.1, Oracle ASM introduces a new data-integrity feature: Scrubbing Disk Groups. Disk scrubbing checks for logical data corruption on Normal or High redundancy disk groups and repairs it using the mirror copies. The scrubbing process is combined with disk group rebalancing to reduce the extra I/O, so the I/O overhead of disk scrubbing is minimal. Scrubbing can be run against a whole disk group, a single disk, or a single file using the ALTER DISKGROUP statement, for example: SQL> ALTER DISKGROUP data SCRUB POWER LOW; SQL> ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/ASKMACLEAN/example.266.806582193' REPAIR POWER HIGH FORCE; SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER HIGH FORCE; Notes on the SCRUB clause: The REPAIR option repairs the logical corruption that is found; if REPAIR is not specified, SCRUB only checks for and reports corruption without repairing it. The POWER option can be set to AUTO, LOW, HIGH or MAX; if POWER is not specified, it defaults to AUTO. With the WAIT option the statement returns only after the scrubbing completes; without WAIT, the request is added to the scrubbing queue and the statement returns immediately. With the FORCE option, the scrubbing is processed even if the system load is high and the scrubbing would have an I/O impact.

    Read the article

  • IOUG Enterprise Manager SIG Webinar: Performance Tuning your Database Cloud in Oracle Enterprise Manager 12c Cloud Control - 360 Degrees

    - by Patrick Rood
    October 25, 2013 EM 12c Sales Blast | IOUG Enterprise Manager SIG WEBINAR: Performance Tuning your Database Cloud in Oracle Enterprise Manager 12c Cloud Control - 360 Degrees Last year, the Independent Oracle User Group (IOUG) established a fast-growing Special Interest Group (SIG) devoted to Enterprise Manager, and has sponsored Quarterly Newsletters and Webinars about EM. To drive more interest in EM and the SIG, IOUG would like Oracle to invite customers to its latest techcast. Your customers will learn how to leverage Oracle Enterprise Manager 12c for tuning, trouble-shooting and monitoring their Oracle Database Cloud Ecosystem. The session covers lessons learned, tips/tricks, recommendations, best practices, "gotchas" and a whole lot more on how to effectively use Oracle Enterprise Manager 12c Cloud Control for quick, easy and intuitive performance tuning of an Oracle Database Cloud. Session Objectives: • Leveraging Enterprise Manager 12c Cloud Control for Oracle Database Tuning/Monitoring • Limited Deep-Dive on Automatic Workload Repository (AWR) • Oracle Database Cloud Performance Tuning • Best Practices for Database Cloud Maintenance and Monitoring Featured Speaker: Tariq Farooq, CEO, BrainSurface and Mike Ault Date & Time: Wednesday, October 30 12:00 PM- 1:00 PM Central Time (USA) Register Here 

    Read the article

  • Is Multicast broken for Android 2.0.1 (currently on the DROID) or am I missing something?

    - by Gubatron
    This code works perfectly in Ubuntu, in Windows and MacOSX, it also works fine with a Nexus-One currently running firmware 2.1.1. I start sending and listening multicast datagrams, and all the computers and the Nexus-One will see each other perfectly. Then I run the same code on a Droid (Firmware 2.0.1), and everybody will get the packets sent by the Droid, but the droid will listen only to it's own packets. This is the run() method of a thread that's constantly listening on a Multicast group for incoming packets sent to that group. I'm running my tests on a local network where I have multicast support enabled in the router. My goal is to have devices meet each other as they come on line by broadcasting packages to a multicast group. public void run() { byte[] buffer = new byte[65535]; DatagramPacket dp = new DatagramPacket(buffer, buffer.length); try{ MulticastSocket ms = new MulticastSocket(_port); ms.setNetworkInterface(_ni); //non loopback network interface passed ms.joinGroup(_ia); //the multicast address, currently 224.0.1.16 Log.v(TAG,"Joined Group " + _ia); while (true) { ms.receive(dp); String s = new String(dp.getData(),0,dp.getLength()); Log.v(TAG,"Received Package on "+ _ni.getName() +": " + s); Message m = new Message(); Bundle b = new Bundle(); b.putString("event", "Listener ("+_ni.getName()+"): \"" + s + "\""); m.setData(b); dispatchMessage(m); //send to ui thread } } catch (SocketException se) { System.err.println(se); } catch (IOException ie) { System.err.println(ie); } } Over here, is the code that sends the Multicast Datagram out of every valid network interface available (that's not the loopback interface). public void sendPing() { MulticastSocket ms = null; try { ms = new MulticastSocket(_port); ms.setTimeToLive(TTL_GLOBAL); List<NetworkInterface> interfaces = getMulticastNonLoopbackNetworkInterfaces(); for (NetworkInterface iface : interfaces) { //skip loopback if (iface.getName().equals("lo")) continue; ms.setNetworkInterface(iface); _buffer = ("FW-"+ _name +" PING ("+iface.getName()+":"+iface.getInetAddresses().nextElement()+")").getBytes(); DatagramPacket dp = new DatagramPacket(_buffer, _buffer.length,_ia,_port); ms.send(dp); Log.v(TAG,"Announcer: Sent packet - " + new String(_buffer) + " from " + iface.getDisplayName()); } } catch (IOException e) { e.printStackTrace(); } catch (Exception e2) { e2.printStackTrace(); } } Update (April 2nd 2010) I found a way to have the Droid's network interface to communicate using Multicast! _wifiMulticastLock = ((WifiManager) context.getSystemService(Context.WIFI_SERVICE)).createMulticastLock("multicastLockNameHere"); _wifiMulticastLock.acquire(); Then when you're done... if (_wifiMulticastLock != null && _wifiMulticastLock.isHeld()) _wifiMulticastLock.release(); After I did this, the Droid started sending and receiving UDP Datagrams on a Multicast group. gubatron

    Read the article

  • SQL -- How to combine three SELECT statements with very tricky requirements

    - by Frederick
    I have a SQL query with three SELECT statements. A picture of the data tables generated by these three select statements is located at www.britestudent.com/pub/1.png. Each of the three data tables have identical columns. I want to combine these three tables into one table such that: (1) All rows in top table (Table1) are always included. (2) Rows in the middle table (Table2) are included only when the values in column1 (UserName) and column4 (CourseName) do not match with any row from Table1. Both columns need to match for the row in Table2 to not be included. (3) Rows in the bottom table (Table3) are included only when the value in column4 (CourseName) is not already in any row of the results from combining Table1 and Table2. I have had success in implementing (1) and (2) with an SQL query like this: SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( "SELECT statement 1" UNION "SELECT statement 2" ) dt_derivedTable_1 GROUP BY CourseName, UserName Where "SELECT statement 1" is the query that generates Table1 and "SELECT statement 2" is the query that generates Table2. A picture of the data table generated by this query is located at www.britestudent.com/pub/2.png. I can get away with using the MAX() function because values in the AmountUsed and AnsweredCorrectly columns in Table1 will always be larger than those in Table2 (and they are identical in the last three columns of both tables). What I fail at is implementing (3). Any suggestions on how to do this will be appreciated. It is tricky because the UserName values in Table3 are null, and because the CourseName values in the combined Table1 and Table2 results are not unique (but they are unique in Table3). After implementing (3), the final table should look like the table in picture 2.png with the addition of the last row from Table3 (the row with the CourseName value starting with "4. Klasse..." I have tried to implement (3) using another derived table using SELECT, MAX() and UNION, but I could not get it to work. Below is my full SQL query with the lines from this failed attempt to implement (3) commented out. Cheers, Frederick PS--I am new to this forum (and new to SQL as well), but I have had more of my previous problems answered by reading other people's posts on this forum than from reading any other forum or Web site. This forum is a great resources. -- SELECT DISTINCT MAX(UserName), MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse -- FROM ( SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( -- Table 1 - All UserAccount/Course combinations that have had quizzez. 
SELECT DISTINCT dbo.win_user.user_name AS UserName, cast(dbo.GetAmountUsed(dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.no_of_questionsets_in_course) as nvarchar(10)) AS AmountUsed, Isnull(cast(dbo.GetAnswerCorrectly(dbo.session_header.win_user_id, dbo.course.course_id, dbo.question_set.no_of_questions) as nvarchar(10)),0) AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.session_detail INNER JOIN dbo.session_header ON dbo.session_detail.session_header_id = dbo.session_header.session_header_id INNER JOIN dbo.win_user ON dbo.session_header.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.win_user_course ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.question_set ON dbo.session_header.question_set_id = dbo.question_set.question_set_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.session_detail.no_of_attempts = 1 OR dbo.session_detail.no_of_attempts IS NULL) AND (dbo.session_detail.is_correct = 1 OR dbo.session_detail.is_correct IS NULL) AND (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.question_set.no_of_questions, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.course_code UNION ALL -- Table 2 - All UserAccount/Course combinations that do or do not have quizzes but where the Course is selected for quizzes for that User Account. SELECT dbo.win_user.user_name AS UserName, -1 AS AmountUsed, -1 AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.win_user_course INNER JOIN dbo.win_user ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.course.course_id, dbo.course.course_code ) dt_derivedTable_1 GROUP BY CourseName, UserName -- UNION ALL -- Table 3 - All Courses. -- SELECT DISTINCT null AS UserName, -- -2 AS AmountUsed, -- -2 AS AnsweredCorrectly, -- dbo.course.course_name AS CourseName, -- dbo.course.course_code, -- dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, -- dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse -- FROM dbo.course -- WHERE is_active = 'True' -- ) dt_derivedTable_2 -- GROUP BY CourseName -- ORDER BY CourseName
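
    Requirement (3) is easier to see outside SQL, so here is the intended rule restated in Python with made-up sample rows (illustration only): after Table1 and Table2 have been merged, a Table3 course row is appended only when its CourseName is not already present. In SQL the same idea would typically be a NOT EXISTS (or NOT IN) filter on CourseName against the combined derived table.

        def combine(merged_1_2, table3):
            """merged_1_2: rows already produced by requirements (1) and (2).
            table3: one row per course, with UserName left empty.
            Add a course row only if its CourseName is not in the merged result."""
            seen = {row["CourseName"] for row in merged_1_2}
            return merged_1_2 + [row for row in table3 if row["CourseName"] not in seen]

        merged = [{"UserName": "anna", "CourseName": "3. Klasse Mathe"}]     # sample data
        courses = [{"UserName": None, "CourseName": "3. Klasse Mathe"},
                   {"UserName": None, "CourseName": "4. Klasse Mathe"}]
        print(combine(merged, courses))
        # keeps anna's row and appends only the unmatched "4. Klasse..." course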

    Read the article

  • How do I combine or merge grouped nodes?

    - by LOlliffe
    Using the XSL: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" version="2.0"> <xsl:output method="xml"/> <xsl:template match="/"> <records> <record> <!-- Group record by bigID, for further processing --> <xsl:for-each-group select="records/record" group-by="bigID"> <xsl:sort select="bigID"/> <xsl:for-each select="current-group()"> <!-- Create new combined record --> <bigID> <!-- <xsl:value-of select="."/> --> <xsl:for-each select="."> <xsl:value-of select="bigID"/> </xsl:for-each> </bigID> <text> <xsl:value-of select="text"/> </text> </xsl:for-each> </xsl:for-each-group> </record> </records> </xsl:template> I'm trying to change: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <bigID>456</bigID> <text>Some 456 text</text> <bigID>123</bigID> <text>More 123 text</text> <bigID>123</bigID> <text>Yet more 123 text</text> </record> into: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <text>More 123 text</text> <text>Yet more 123 text</text> </bigID> <bigID>456 <text>Some 456 text</text> </bigID> </record> Right now, I'm just listing the grouped <bigIDs, individually. I'm missing the step after grouping, where I combine the grouped <bigID nodes. My suspicion is that I need to use the "key" function somehow, but I'm not sure. Thanks for any help.
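
    For comparison, here is the requested grouping restated in Python (illustration only, not a stylesheet): one output element per bigID containing every text that shares that key. In the XSL above, the <bigID> wrapper is built inside the inner for-each over current-group(), so it is still emitted once per item rather than once per group.

        from itertools import groupby

        records = [("123", "Contains text for 123"), ("456", "Some 456 text"),
                   ("123", "More 123 text"), ("123", "Yet more 123 text")]

        # Sort by bigID, then emit one group per key with all of its texts.
        for big_id, rows in groupby(sorted(records, key=lambda r: r[0]), key=lambda r: r[0]):
            print(big_id, [text for _, text in rows])
        # 123 ['Contains text for 123', 'More 123 text', 'Yet more 123 text']
        # 456 ['Some 456 text']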

    Read the article

  • How do I convert Data::Dumper output back into a Perl data structure?

    - by newbee_me
    Hi all! I was wondering if you could shed some lights regarding the code I've been doing for a couple of days. I've been trying to convert a Perl-parsed hash back to XML using the XMLout() and XMLin() method and it has been quite successful with this format. #!/usr/bin/perl -w use strict; # use module use IO::File; use XML::Simple; use XML::Dumper; use Data::Dumper; my $dump = new XML::Dumper; my ( $data, $VAR1 ); Topology:$VAR1 = { 'device' => { 'FOC1047Z2SZ' => { 'ChassisID' => '2009-09', 'Error' => undef, 'Group' => { 'ID' => 'A1', 'Type' => 'Base' }, 'Model' => 'CATALYST', 'Name' => 'CISCO-SW1', 'Neighbor' => {}, 'ProbedIP' => 'TEST', 'isDerived' => 0 } }, 'issues' => [ 'TEST' ] }; # create object my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true'); # convert Perl array ref into XML document $data = $xml->XMLout($VAR1); #reads an XML file my $X_out = $xml->XMLin($data); # access XML data print Dumper($data); print "STATUS: $X_out->{issues}\n"; print "CHASSIS ID: $X_out->{device}{ChassisID}\n"; print "GROUP ID: $X_out->{device}{Group}{ID}\n"; print "DEVICE NAME: $X_out->{device}{Name}\n"; print "DEVICE NAME: $X_out->{device}{name}\n"; print "ERROR: $X_out->{device}{error}\n"; I can access all the element in the XML with no problem. But when I try to create a file that will house the parsed hash, problem arises because I can't seem to access all the XML elements. I guess, I wasn't able to unparse the file with the following code. #!/usr/bin/perl -w use strict; #!/usr/bin/perl # use module use IO::File; use XML::Simple; use XML::Dumper; use Data::Dumper; my $dump = new XML::Dumper; my ( $data, $VAR1, $line_Holder ); #this is the file that contains the parsed hash my $saveOut = "C:/parsed_hash.txt"; my $result_Holder = IO::File->new($saveOut, 'r'); while ($line_Holder = $result_Holder->getline){ print $line_Holder; } # create object my $xml = new XML::Simple (NoAttr=>1, RootName=>'data', SuppressEmpty => 'true'); # convert Perl array ref into XML document $data = $xml->XMLout($line_Holder); #reads an XML file my $X_out = $xml->XMLin($data); # access XML data print Dumper($data); print "STATUS: $X_out->{issues}\n"; print "CHASSIS ID: $X_out->{device}{ChassisID}\n"; print "GROUP ID: $X_out->{device}{Group}{ID}\n"; print "DEVICE NAME: $X_out->{device}{Name}\n"; print "DEVICE NAME: $X_out->{device}{name}\n"; print "ERROR: $X_out->{device}{error}\n"; Do you have any idea how I could access the $VAR1 inside the text file? Regards, newbee_me

    Read the article

  • Getting the constructor of an Interface Type through reflection, is there a better approach than looping through assembly types?

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly types for implementations of my interface? I don't care about rewriting a piece of my code, I want to do it right on the first place so that I won't need to come back again and again and again. EDIT #1 Following Sam's advice, I will for now go with the IName and Name convention. However, is it me or there's some way to improve my code? Thanks! =)
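
    Stripped of the C# specifics, the underlying problem is that an interface type has no constructor, so something has to map the interface to its concrete class before reflection can find one. A toy sketch of that registry idea follows (Python, purely illustrative); in C# the same role is usually played by an attribute lookup, a naming convention such as the IName/Name pairing mentioned above, or a dependency-injection container.

        # Toy registry: resolve an "interface" to its concrete class, then call the
        # concrete constructor -- the interface itself never has one to reflect on.
        REGISTRY = {}

        def implements(interface):
            """Class decorator standing in for an attribute such as [DirectorySchema]."""
            def register(cls):
                REGISTRY[interface] = cls
                return cls
            return register

        class IGroup:            # the "interface"
            pass

        @implements(IGroup)
        class Group(IGroup):     # the hidden concrete type
            def __init__(self, entry):
                self.entry = entry

        def create(interface, *args):
            return REGISTRY[interface](*args)    # look up the implementation, invoke its ctor

        print(type(create(IGroup, "cn=Admins")).__name__)   # Group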

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quick. Here's a run down of what I have, how I'm querying it, and how long it's taking: I'm running SQL Server 2008 Standard, so Partitioning isn't currently an option I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table: CREATE TABLE [dbo].[LogInvSearches_Daily]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [Inv_ID] [int] NOT NULL, [Site_ID] [int] NOT NULL, [LogCount] [int] NOT NULL, [LogDay] [smalldatetime] NOT NULL, CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY] ) ON [PRIMARY] This table has 132,000,000 records, and is over 4 gigs. A sample of 10 rows from the table: ID Inv_ID Site_ID LogCount LogDay -------------------- ----------- ----------- ----------- ----------------------- 1 486752 48 14 2009-07-21 00:00:00 2 119314 51 16 2009-07-21 00:00:00 3 313678 48 25 2009-07-21 00:00:00 4 298863 0 1 2009-07-21 00:00:00 5 119996 0 2 2009-07-21 00:00:00 6 463777 534 7 2009-07-21 00:00:00 7 339976 503 2 2009-07-21 00:00:00 8 333501 570 4 2009-07-21 00:00:00 9 453955 0 12 2009-07-21 00:00:00 10 443291 0 4 2009-07-21 00:00:00 (10 row(s) affected) I have the following index on LogInvSearches_Daily: /****** Object: Index [IX_LogInvSearches_Daily_LogDay] Script Date: 05/12/2010 11:08:22 ******/ CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC ) INCLUDE ( [Inv_ID], [LogCount]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] I need to pull inventory only from the Inventory for a specific account id. I have an index on the Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. 
This query is currently taking 24 seconds to return the 5 rows: StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- SELECT TOP 5 Sum(LogCount) AS Views , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , Inv_ID FROM LogInvSearches_Daily D (NOLOCK) WHERE LogDay DateAdd(d, -30, getdate()) AND EXISTS( SELECT NULL FROM propertyControlCenter.dbo.Inventory (NOLOCK) WHERE Acct_ID = 18731 AND Inv_ID = D.Inv_ID ) GROUP BY Inv_ID (1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |--Top(TOP EXPRESSION:((5))) |--Sequence Project(DEFINE:([Expr1007]=dense_rank)) |--Segment |--Segment |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC)) |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010])) | |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA (13 row(s) affected) I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster, and gives me essentially the same execution plan. 
(1 row(s) affected) StmtText ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --SET SHOWPLAN_TEXT ON; WITH getSearches AS ( SELECT LogCount -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank , D.Inv_ID FROM LogInvSearches_Daily D (NOLOCK) INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK) ON Acct_ID = 18731 AND I.Inv_ID = D.Inv_ID WHERE LogDay DateAdd(d, -30, getdate()) -- GROUP BY Inv_ID ) SELECT Sum(LogCount) AS Views, Inv_ID FROM getSearches GROUP BY Inv_ID (1 row(s) affected) StmtText ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount]))) |--Sort(ORDER BY:([D].[Inv_ID] ASC)) |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID])) |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007])) | |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6)))) | | |--Constant Scan | |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD) |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD) (8 row(s) affected) (1 row(s) affected) So given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

    Read the article

  • How do I combine grouped nodes?

    - by LOlliffe
    Using the XSL: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" version="2.0"> <xsl:output method="xml"/> <xsl:template match="/"> <records> <record> <!-- Group record by bigID, for further processing --> <xsl:for-each-group select="records/record" group-by="bigID"> <xsl:sort select="bigID"/> <xsl:for-each select="current-group()"> <!-- Create new combined record --> <bigID> <!-- <xsl:value-of select="."/> --> <xsl:for-each select="."> <xsl:value-of select="bigID"/> </xsl:for-each> </bigID> <text> <xsl:value-of select="text"/> </text> </xsl:for-each> </xsl:for-each-group> </record> </records> </xsl:template> I'm trying to change: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <bigID>456</bigID> <text>Some 456 text</text> <bigID>123</bigID> <text>More 123 text</text> <bigID>123</bigID> <text>Yet more 123 text</text> </record> into: <?xml version="1.0" encoding="UTF-8"?> <records> <record> <bigID>123</bigID> <text>Contains text for 123</text> <text>More 123 text</text> <text>Yet more 123 text</text> </bigID> <bigID>456 <text>Some 456 text</text> </bigID> </record> Right now, I'm just listing the grouped <bigIDs, individually. I'm missing the step after grouping, where I combine the grouped <bigID nodes. My suspicion is that I need to use the "key" function somehow, but I'm not sure. Thanks for any help.

    Read the article

  • Getting the constructor of an Interface Type through reflection?

    - by Will Marcouiller
    I have written a generic type: IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interfaces objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following: IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements `IDirectoryEntry`, of course.` foreach (IGroup g in groups.ToList()) { listView1.Items.Add(g.Name).SubItems.Add(g.Description); } From the IDirectorySource<T>.ToList() methods, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class. [DirectorySchemaAttribute("group")] public interface IGroup { } internal class Group : IGroup { internal Group(DirectoryEntry entry) { NativeEntry = entry; Domain = NativeEntry.Path; } // Implementing IGroup interface... } Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows: internal class DirectorySource<T> : IDirectorySource<T> { // Implementing properties... // Methods implementations... public IList<T> ToList() { Type t = typeof(T) // Let's assume we're always working with the IGroup interface as T here to keep it simple. // So, my `DirectorySchema` property is already set to "group". // My `DirectorySearcher` is already instantiated here, as I do it within the DirectorySource<T> constructor. Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema) ConstructorInfo ctor = null; ParameterInfo[] params = null; // This is where I get stuck for now... Please see the helper method. GetConstructor(out ctor, out params, new Type() { DirectoryEntry }); SearchResultCollection results = null; try { results = Searcher.FindAll(); } catch (DirectoryServicesCOMException ex) { // Handling exception here... } foreach (SearchResult entry in results) entities.Add(ctor.Invoke(new object() { entry.GetDirectoryEntry() })); return entities; } } private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type paramsTypes) { Type t = typeof(T); ConstructorInfo[] ctors = t.GetConstructors(BindingFlags.CreateInstance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.InvokeMethod); bool found = true; foreach (ContructorInfo c in ctors) { parameters = c.GetParameters(); if (parameters.GetLength(0) == paramsTypes.GetLength(0)) { for (int index = 0; index < parameters.GetLength(0); ++index) { if (!(parameters[index].GetType() is paramsTypes[index].GetType())) found = false; } if (found) { constructor = c; return; } } } // Processing constructor not found message here... } My problem is that T will always be an interface, so it never finds a constructor. Might somebody guide me to the right path to follow in this situation?

    Read the article

  • No Method Error in Ruby

    - by JayG
    Hi, I currently have a Rails Apps that lets users drag and drop certain elements of the webpage and updates the application based on the users choice. This is done with the help of the Rails helpers and AJAX. However I keep running into a "NoMethodError" in Ruby. NoMethodError in ProjectsController#member_change undefined method `symbolize_keys' for nil:NilClass Here is the method that is being called. My trace says that error is occurring in this line: before = u.functions_for(r.authorizable_id) u.roles << r unless u.roles.include? r u.save flag_changed = true after = u.functions_for(r.authorizable_id) And here is the function being called def member_change flag_changed = false params['u'] =~ /role_(\d+)_user_(\d+)/ drag_role_id = $1 user_id = $2 params['r'] =~ /role_(\d+)/ drop_role_id = $1 if u=User.find(user_id) if r=Role.find(drop_role_id) if drag_role_id.to_i !=0 and old_r=Role.find(drag_role_id) if drag_role_id == drop_role_id #fom A to A => nothing happen flash.now[:warning] = _('No Operation...') elsif r.authorizable_id == old_r.authorizable_id #the same project? old_r.users.delete(u) unless old_r.valid? flash.now[:warning] = _('Group "Admin" CAN NOT be EMPTY.') old_r.users << u #TODO: better recovery member_edit #if flag_changed render :action => :member_edit, :layout => 'module_with_flash' return end old_r.save r.users << u unless r.users.include? u r.save flag_changed = true before = u.functions_for(r.authorizable_id) after = u.functions_for(r.authorizable_id) added = after - before removed = before - after added.each do |f| ApplicationController::send_msg(:function,:create, {:function_name => f.name, :user_id => u.id, :project_id => r.authorizable_id }) end removed.each do |f| ApplicationController::send_msg(:function,:delete, {:function_name => f.name, :user_id => u.id, :project_id => r.authorizable_id }) end flash.now[:notice] = _( 'Move User to Group' ) + " #{ r.name }" else flash.now[:warning] = _('You can\'t move User between Groups that belong to different Projects.') end else before = u.functions_for(r.authorizable_id) u.roles << r unless u.roles.include? r u.save flag_changed = true after = u.functions_for(r.authorizable_id) added = after - before added.each do |f| ApplicationController::send_msg(:function,:create, {:function_name => f.name, :user_id => u.id, :project_id => r.authorizable_id }) end flash.now[:notice] = _( 'Add User into Group' ) + " #{ r.name }" end else flash.now[:warn] = _( 'Group doesn\'t exist!' ) + ": #{ r.name }" end else flash.now[:warning] = _( 'User doesn\'t exist!' ) + ": #{ u.login }" end member_edit #if flag_changed render :action => :member_edit, :layout => 'module_with_flash' end and the JavaScript used to call the function jQuery('#RemoveThisMember').droppable({accept:'.RolesUsersSelection', drop:function(ev,ui){ if (confirm("This will remove User from this Group, are you sure?")) {jQuery.ajax({data:'u=' + encodeURIComponent(jQuery(ui.draggable).attr('id')), success:function(request){jQuery('#module_content').html(request);}, type:'post', url:'/of/projects/11/member_delete'});} }, hoverClass:'ProjectRoleDropDelete_active'}) Any ideas? Thanks,

    Read the article

  • Using a mounted NTFS share with nginx

    - by Hoff
    I have set up a local testing VM with Ubuntu Server 12.04 LTS and the LEMP stack. It's kind of an unconventional setup because instead of having all my PHP scripts on the local machine, I've mounted an NTFS share as the document root because I do my development on Windows. I had everything working perfectly up until this morning, now I keep getting a dreaded 'File not found.' error. I am almost certain this must be somehow permission related, because if I copy my site over to /var/www, nginx and php-fpm have no problems serving my PHP scripts. What I can't figure out is why all of a sudden (after a reboot of the server), no PHP files will be served but instead just the 'File not found.' error. Static files work fine, so I think it's PHP that is causing the headache. Both nginx and php-fpm are configured to run as the user www-data: root@ubuntu-server:~# ps aux | grep 'nginx\|php-fpm' root 1095 0.0 0.0 5816 792 ? Ss 11:11 0:00 nginx: master process /opt/nginx/sbin/nginx -c /etc/nginx/nginx.conf www-data 1096 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process www-data 1098 0.0 0.1 6016 1172 ? S 11:11 0:00 nginx: worker process root 1130 0.0 0.4 175560 4212 ? Ss 11:11 0:00 php-fpm: master process (/etc/php5/php-fpm.conf) www-data 1131 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1132 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www www-data 1133 0.0 0.3 175560 3216 ? S 11:11 0:00 php-fpm: pool www root 1686 0.0 0.0 4368 816 pts/1 S+ 11:11 0:00 grep --color=auto nginx\|php-fpm I have mounted the NTFS share at /mnt/webfiles by editing /etc/fstab and adding the following line: //192.168.0.199/c$/Websites/ /mnt/webfiles cifs username=Jordan,password=mypasswordhere,gid=33,uid=33 0 0 Where gid 33 is the www-data group and uid 33 is the user www-data. If I list the contents of one of my sites you can in fact see that they belong to the user www-data: root@ubuntu-server:~# ls -l /mnt/webfiles/nTv5-2.0 total 8 drwxr-xr-x 0 www-data www-data 0 Jun 6 19:12 app drwxr-xr-x 0 www-data www-data 0 Aug 22 19:00 assets -rwxr-xr-x 0 www-data www-data 1150 Jan 4 2012 favicon.ico -rwxr-xr-x 0 www-data www-data 1412 Dec 28 2011 index.php drwxr-xr-x 0 www-data www-data 0 Jun 3 16:44 lib drwxr-xr-x 0 www-data www-data 0 Jan 3 2012 plugins drwxr-xr-x 0 www-data www-data 0 Jun 3 16:45 vendors If I switch to the www-data user, I have no problem creating a new file on the share: root@ubuntu-server:~# su www-data $ > /mnt/webfiles/test.txt $ ls -l /mnt/webfiles | grep test\.txt -rwxr-xr-x 0 www-data www-data 0 Sep 8 11:19 test.txt There should be no problem reading or writing to the share with php-fpm running as the user www-data. When I examine the error log of nginx, it's filled with a bunch of lines that look like the following: 2012/09/08 11:22:36 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" 2012/09/08 11:22:39 [error] 1096#0: *1 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.0.199, server: , request: "GET /apc.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.123" It's bizarre that this was working previously and now all of sudden PHP is complaining that it can't "find" the scripts on the share. Does anybody know why this is happening? 
EDIT I tried editing php-fpm.conf and changing chdir to the following: chdir = /mnt/webfiles When I try and restart the php-fpm service, I get the error: Starting php-fpm [08-Sep-2012 14:20:55] ERROR: [pool www] the chdir path '/mnt/webfiles' does not exist or is not a directory This is a total load of bullshit because this directory DOES exist and is mounted! Any ls commands to list that directory work perfectly. Why the hell can't PHP-FPM see this directory?! Here are my configuration files for reference: nginx.conf user www-data; worker_processes 2; error_log /var/log/nginx/nginx.log info; pid /var/run/nginx.pid; events { worker_connections 1024; multi_accept on; } http { include fastcgi.conf; include mime.types; default_type application/octet-stream; set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; ## Proxy proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 32m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffers 32 4k; ## Compression gzip on; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ### TCP options tcp_nodelay on; tcp_nopush on; keepalive_timeout 65; sendfile on; include /etc/nginx/sites-enabled/*; } my site config server { listen 80; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log; root /mnt/webfiles/nTv5-2.0/app/webroot; index index.php; ## Block bad bots if ($http_user_agent ~* (HTTrack|HTMLParser|libcurl|discobot|Exabot|Casper|kmccrew|plaNETWORK|RPT-HTTPClient)) { return 444; } ## Block certain Referers (case insensitive) if ($http_referer ~* (sex|vigra|viagra) ) { return 444; } ## Deny dot files: location ~ /\. { deny all; } ## Favicon Not Found location = /favicon.ico { access_log off; log_not_found off; } ## Robots.txt Not Found location = /robots.txt { access_log off; log_not_found off; } if (-f $document_root/maintenance.html) { rewrite ^(.*)$ /maintenance.html last; } location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { # Some basic cache-control for static files to be sent to the browser expires max; add_header Pragma public; add_header Cache-Control "max-age=2678400, public, must-revalidate"; } location / { try_files $uri $uri/ index.php; if (-f $request_filename) { break; } rewrite ^(.+)$ /index.php?url=$1 last; } location ~ \.php$ { include /etc/nginx/fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; } } php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix (/opt/php5). This prefix can be dynamicaly changed by using the ; '-p' argument from the command line. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. ; Relative path can also be used. 
They will be prefixed by: ; - the global prefix if it's been set (-p arguement) ; - /opt/php5 otherwise ;include=etc/fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Note: the default prefix is /opt/php5/var ; Default Value: none pid = /var/run/php-fpm.pid ; Error log file ; Note: the default prefix is /opt/php5/var ; Default Value: log/php-fpm.log error_log = /var/log/php5-fpm/php-fpm.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes ;daemonize = yes ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; Multiple pools of child processes may be started with different listening ; ports and different management options. The name of the pool will be ; used in logs and stats. There is no limitation on the number of pools which ; FPM can handle. Your system will tell you anyway :) ; Start a new pool named 'www'. ; the variable $pool can we used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /opt/php5) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses on a ; specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. ;listen = 127.0.0.1:9000 listen = /var/run/php5-fpm.sock ; Set listen(2) backlog. A value of '-1' means unlimited. ; Default Value: 128 (-1 on FreeBSD and OpenBSD) ;listen.backlog = -1 ; List of ipv4 addresses of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. 
; Default Values: user and group are set as the running user
;                 mode is set to 0666
;listen.owner = www-data
;listen.group = www-data
;listen.mode = 0666

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
;       will be used.
user = www-data
group = www-data

; Choose how the process manager will control the number of child processes.
; Possible Values:
;   static  - a fixed number (pm.max_children) of child processes;
;   dynamic - the number of child processes are set dynamically based on the
;             following directives:
;             pm.max_children      - the maximum number of children that can
;                                    be alive at the same time.
;             pm.start_servers     - the number of children created on startup.
;             pm.min_spare_servers - the minimum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is less than this
;                                    number then some children will be created.
;             pm.max_spare_servers - the maximum number of children in 'idle'
;                                    state (waiting to process). If the number
;                                    of 'idle' processes is greater than this
;                                    number then some children will be killed.
; Note: This value is mandatory.
pm = dynamic

; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes to be created when pm is set to 'dynamic'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the ApacheMaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI.
; Note: Used when pm is set to either 'static' or 'dynamic'
; Note: This value is mandatory.
pm.max_children = 50

; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.start_servers = 20

; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 5

; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 35

; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
pm.max_requests = 500

; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. By default, the status page shows the following
; information:
;   accepted conn        - the number of request accepted by the pool;
;   pool                 - the name of the pool;
;   process manager      - static or dynamic;
;   idle processes       - the number of idle processes;
;   active processes     - the number of active processes;
;   total processes      - the number of idle + active processes.
;   max children reached - number of times, the process limit has been reached,
;                          when pm tries to start more children (works only for
;                          pm 'dynamic')
; The values of 'idle processes', 'active processes' and 'total processes' are
; updated each second. The value of 'accepted conn' is updated in real time.
; Example output:
;   accepted conn:        12073
;   pool:                 www
;   process manager:      static
;   idle processes:       35
;   active processes:     65
;   total processes:      100
;   max children reached: 1
; By default the status page output is formatted as text/plain. Passing either
; 'html' or 'json' as a query string will return the corresponding output
; syntax. Example:
;   http://www.foo.bar/status
;   http://www.foo.bar/status?json
;   http://www.foo.bar/status?html
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
pm.status_path = /status

; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
;   - create a graph of FPM availability (rrd or such);
;   - remove a server from a group if it is not responding (load balancing);
;   - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
;       anything, but it may not be a good idea to use the .php extension or it
;       may conflict with a real PHP file.
; Default Value: not set
ping.path = /ping

; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
ping.response = pong

; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0

; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0

; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
;slowlog = log/$pool.log.slow

; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024

; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater or equal to 0
; Default Value: system defined value
;rlimit_core = 0

; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
;       of its subdirectories. If the pool prefix is not set, the global prefix
;       will be used instead.
; Note: chrooting is a great security feature and should be used whenever
;       possible. However, all PHP paths will be relative to the chroot
;       (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =

; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /var/www

; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: on highloaded environement, this can cause some delay in the page
;       process time (several ms).
; Default Value: no
;catch_workers_output = yes

; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
;env[HOSTNAME] = $HOSTNAME
;env[PATH] = /usr/local/bin:/usr/bin:/bin
;env[TMP] = /tmp
;env[TMPDIR] = /tmp
;env[TEMP] = /tmp

; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
;   php_value/php_flag             - you can set classic ini defines which can
;                                    be overwritten from PHP call 'ini_set'.
;   php_admin_value/php_admin_flag - these directives won't be overwritten by
;                                    PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.
; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.
; Note: path INI options can be relative and will be expanded with the prefix
;       (pool, global or /opt/php5)
; Default Value: nothing is defined by default except the values in php.ini and
;                specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected]
;php_flag[display_errors] = off
;php_admin_value[error_log] = /var/log/fpm-php.www.log
;php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 32M
php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i
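
With the values above, the documented default for pm.start_servers works out to 5 + (35 - 5) / 2 = 20, which matches the explicit setting in this pool. The file also documents request_slowlog_timeout and slowlog as a pair but ships them commented out; a minimal sketch of enabling them for this pool follows — the timeout value and log path are illustrative, not taken from the original file:

; illustrative values - pick a timeout and path that suit your environment
request_slowlog_timeout = 5s
slowlog = /var/log/php5-fpm/$pool.log.slow

Once the pool is reloaded, any request running longer than that timeout has its PHP backtrace appended to the slow log. And because pm.status_path and ping.path are enabled above, the pool can also be checked from outside via /status (optionally with ?json or ?html) and /ping, provided the web server routes those URIs through to FPM.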

    Read the article

  • Rspec2, Rails3, Authlogic: Can't run specs

    - by Sam
    When I do rspec spec in my rails project, I get No examples were matched. Perhaps {:if=>#<Proc:0x0000010126e998@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:50 (lambda)>, :unless=>#<Proc:0x0000010126e970@/Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:51 (lambda)>} is excluding everything? Finished in 0.00004 seconds 0 examples, 0 failures Now, this seems like maybe if I wrote a spec it would work, but as soon as I write a spec (and I do include spec_helper) /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError) from /{myapp}/app/models/user_session.rb:1:in `<top (required)>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:138:in `block (2 levels) in eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:137:in `block in eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/engine.rb:135:in `eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:108:in `eager_load!' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application/finisher.rb:41:in `block in <module:Finisher>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `instance_exec' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:25:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:50:in `block in run_initializers' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `each' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/initializable.rb:49:in `run_initializers' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:134:in `initialize!' 
from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/railties-3.0.3/lib/rails/application.rb:77:in `method_missing' from /{myapp}/config/environment.rb:5:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from /{myapp}/spec/spec_helper.rb:3:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from /{myapp}/spec/controllers/pages_controller_spec.rb:1:in `<top (required)>' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `block in load_spec_files' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `map' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/configuration.rb:388:in `load_spec_files' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/command_line.rb:18:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:55:in `run_in_process' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:46:in `run' from /Users/samliu/.rvm/gems/ruby-1.9.2-p0@rails3/gems/rspec-core-2.3.1/lib/rspec/core/runner.rb:10:in `block in autorun' The important line here seems to be /core/backward_compatibility.rb:20:in `const_missing': uninitialized constant Authlogic (NameError) Now if this were rails 2.3.8, I'd simply put config.gem "authlogic" into the environment.rb, in the initialization code block. However, the rails 3 environment.rb looks way different (there is no config code block, so putting it in arbitrarily causes an error where config is not defined). So my questions are 1) Do I actually have to put the gem config anywhere? I looked at https://github.com/trevmex/authlogic_rails3_example/ and it seems he didn't put it anywhere. 2) Does anyone know what I'm doing wrong in terms of rspec? 
My gem list is *** LOCAL GEMS *** abstract (1.0.0) actionmailer (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) actionpack (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activemodel (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) activerecord (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activeresource (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) activesupport (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2, 2.3.4) arel (2.0.6, 1.0.1) asdf (0.5.0) authlogic (2.1.6, 2.1.3) autotest (4.4.6, 4.4.1) autotest-fsevent (0.2.4) autotest-growl (0.2.9) autotest-rails (4.1.0) autotest-rails-pure (4.1.2) bluecloth (2.0.9) builder (2.1.2) bundler (1.0.7, 1.0.2) cgi_multipart_eof_fix (2.5.0) commonwatir (1.6.2) couchrest (0.33) cri (1.0.1) cucumber (0.4.4, 0.4.3, 0.3.11) daemons (1.1.0, 1.0.10) dependencies (0.0.7) diff-lcs (1.1.2) erubis (2.6.6) fastercsv (1.5.0) fastthread (1.0.7) firewatir (1.6.2) flay (1.4.0) flog (2.2.0) funfx (0.2.2) gem_plugin (0.2.3) gemsonrails (0.7.2) giraffesoft-resource_controller (0.6.5) haml (2.2.14) hoe (2.3.3) i18n (0.4.1) jscruggs-metric_fu (1.1.5) json_pure (1.1.9) kramdown (0.12.0) mail (2.2.13, 2.2.6.1) memcache-client (1.8.5) mime-types (1.16) mojombo-chronic (0.3.0) mongrel (1.1.5) monk (0.0.7) nanoc (3.1.5) nanoc3 (3.1.5) nokogiri (1.4.3.1, 1.4.0) open4 (0.9.6) polyglot (0.3.1, 0.2.9) rack (1.2.1, 1.0.1) rack-mount (0.6.13) rack-test (0.5.6) rails (3.0.0, 2.3.4) rails3-generators (0.17.0, 0.14.0) railties (3.0.3, 3.0.1, 3.0.0, 3.0.0.rc2) rake (0.8.7) relevance-rcov (0.9.2.1) rest-client (1.0.3) rspec (2.3.0, 2.0.0.rc, 1.2.9) rspec-core (2.3.1, 2.0.0.rc) rspec-expectations (2.3.0, 2.0.0.rc) rspec-mocks (2.3.0, 2.0.0.rc) rspec-rails (2.3.1, 2.0.0.rc, 1.2.9) ruby_parser (2.0.4) rubyforge (2.0.3) rubygems-update (1.3.6, 1.3.5) rvm (1.0.13) s4t-utils (1.0.4) safariwatir (0.3.7) sexp_processor (3.0.3) spork (0.7.3) sqlite3-ruby (1.3.1, 1.2.5) sys-uname (0.8.5) term-ansicolor (1.0.4) text-format (1.0.0) text-hyphen (1.0.0) thor (0.14.6, 0.14.3, 0.12.0) treetop (1.4.8, 1.4.2) tzinfo (0.3.23) user-choices (1.1.6) vlad (2.0.0) vlad-git (2.1.0) webrat (0.7.1, 0.6.0, 0.5.3) xml-simple (1.0.12) ZenTest (4.4.2) I am using ruby 1.9.2 and rails 3.0.3 installed using RVM on OSX 10.6 Snow Leopard. I just want to be able to run my specs like I used to. As a separate issue, autotest yields an error about an include for autotest/growl but I installed autotest-growl. Maybe this is a gem issue? I tried doing the same things and get the same error when it comes to using my ubuntu 10.04 server machine though. Gemfile source 'http://rubygems.org' gem 'rails', '3.0.3' # Bundle edge Rails instead: # gem 'rails', :git => 'git://github.com/rails/rails.git' gem 'sqlite3-ruby', :require => 'sqlite3' group :couch do gem 'couchrest' end group :user_auth do gem 'authlogic' gem "rails3-generators" gem 'facebooker' end group :markup do gem 'haml' gem 'sass' end group :testing do gem 'rspec-rails' gem 'rspec' gem 'webrat' gem 'cucumber' gem 'capybara' gem 'factory_girl' gem 'shoulda' gem 'autotest' end group :server do gem 'unicorn' end # Use unicorn as the web server # gem 'unicorn' # Deploy with Capistrano # gem 'capistrano' # To use debugger # gem 'ruby-debug' # Bundle the extra gems: # gem 'bj' # gem 'nokogiri' # gem 'sqlite3-ruby', :require => 'sqlite3' # gem 'aws-s3', :require => 'aws/s3' # Bundle gems for the local environment. 
Make sure to # put test-only gems in this group so their generators # and rake tasks are available in development mode: # group :development, :test do # gem 'webrat' # end Gemfile.lock GEM remote: http://rubygems.org/ specs: ZenTest (4.4.2) abstract (1.0.0) actionmailer (3.0.3) actionpack (= 3.0.3) mail (~> 2.2.9) actionpack (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) erubis (~> 2.6.6) i18n (~> 0.4) rack (~> 1.2.1) rack-mount (~> 0.6.13) rack-test (~> 0.5.6) tzinfo (~> 0.3.23) activemodel (3.0.3) activesupport (= 3.0.3) builder (~> 2.1.2) i18n (~> 0.4) activerecord (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) arel (~> 2.0.2) tzinfo (~> 0.3.23) activeresource (3.0.3) activemodel (= 3.0.3) activesupport (= 3.0.3) activesupport (3.0.3) arel (2.0.6) authlogic (2.1.6) activesupport autotest (4.4.6) ZenTest (>= 4.4.1) builder (2.1.2) capybara (0.4.0) celerity (>= 0.7.9) culerity (>= 0.2.4) mime-types (>= 1.16) nokogiri (>= 1.3.3) rack (>= 1.0.0) rack-test (>= 0.5.4) selenium-webdriver (>= 0.0.27) xpath (~> 0.1.2) celerity (0.8.6) childprocess (0.1.6) ffi (~> 0.6.3) couchrest (1.0.1) json (>= 1.4.6) mime-types (>= 1.15) rest-client (>= 1.5.1) cucumber (0.10.0) builder (>= 2.1.2) diff-lcs (~> 1.1.2) gherkin (~> 2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) culerity (0.2.13) diff-lcs (1.1.2) erubis (2.6.6) abstract (>= 1.0.0) facebooker (1.0.75) json_pure (>= 1.0.0) factory_girl (1.3.2) ffi (0.6.3) rake (>= 0.8.7) gherkin (2.3.2) json (~> 1.4.6) term-ansicolor (~> 1.0.5) haml (3.0.25) i18n (0.5.0) json (1.4.6) json_pure (1.4.6) kgio (2.0.0) mail (2.2.13) activesupport (>= 2.3.6) i18n (>= 0.4.0) mime-types (~> 1.16) treetop (~> 1.4.8) mime-types (1.16) nokogiri (1.4.4) polyglot (0.3.1) rack (1.2.1) rack-mount (0.6.13) rack (>= 1.0.0) rack-test (0.5.6) rack (>= 1.0) rails (3.0.3) actionmailer (= 3.0.3) actionpack (= 3.0.3) activerecord (= 3.0.3) activeresource (= 3.0.3) activesupport (= 3.0.3) bundler (~> 1.0) railties (= 3.0.3) rails3-generators (0.17.0) railties (>= 3.0.0) railties (3.0.3) actionpack (= 3.0.3) activesupport (= 3.0.3) rake (>= 0.8.7) thor (~> 0.14.4) rake (0.8.7) rest-client (1.6.1) mime-types (>= 1.16) rspec (2.3.0) rspec-core (~> 2.3.0) rspec-expectations (~> 2.3.0) rspec-mocks (~> 2.3.0) rspec-core (2.3.1) rspec-expectations (2.3.0) diff-lcs (~> 1.1.2) rspec-mocks (2.3.0) rspec-rails (2.3.1) actionpack (~> 3.0) activesupport (~> 3.0) railties (~> 3.0) rspec (~> 2.3.0) rubyzip (0.9.4) sass (3.1.0.alpha.206) selenium-webdriver (0.1.2) childprocess (~> 0.1.5) ffi (~> 0.6.3) json_pure rubyzip shoulda (2.11.3) sqlite3-ruby (1.3.2) term-ansicolor (1.0.5) thor (0.14.6) treetop (1.4.9) polyglot (>= 0.3.1) tzinfo (0.3.23) unicorn (3.1.0) kgio (~> 2.0.0) rack webrat (0.7.2) nokogiri (>= 1.2.0) rack (>= 1.0) rack-test (>= 0.5.3) xpath (0.1.2) nokogiri (~> 1.3) PLATFORMS ruby DEPENDENCIES authlogic autotest capybara couchrest cucumber facebooker factory_girl haml rails (= 3.0.3) rails3-generators rspec rspec-rails sass shoulda sqlite3-ruby unicorn webrat
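
The Rails 2 versus Rails 3 difference the question turns on can be shown in a short sketch (standard Rails file names; the code below is illustrative, not taken from the app above). In Rails 2.3.x a gem was declared inside the initializer block of config/environment.rb:

# Rails 2.3.x -- config/environment.rb (illustrative)
Rails::Initializer.run do |config|
  config.gem 'authlogic'
end

In Rails 3.x the Gemfile replaces config.gem entirely; declaring the gem there and running bundle install is all that is needed, since Bundler.require in config/application.rb loads it at boot:

# Rails 3.x -- Gemfile (illustrative)
gem 'authlogic'

One caveat relevant to the Gemfile shown above: by default Rails 3 only requires the :default group and the group matching the current environment, so gems placed in custom groups such as :user_auth are not auto-required unless those group names are also passed to Bundler.require in config/application.rb.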

    Read the article

  • Watchguard SSLVPN user connection issue

    - by Tory Newnham
    I have a user who needs access to the SSLVPN on our Watchguard firewall from his company-issued laptop. The problem is that when he tries to connect as himself, he cannot connect. If I log in to the machine myself it works fine, and if I add him to the Domain Admins group in Active Directory it works fine. So we know it is an access issue, but I cannot figure out what access he needs. He is in the SSLVPN-Users group, which I thought would give him all the access he needed, but apparently not. Here is the output of the SSLVPN logs when trying to connect:
    2012-09-14T15:40:55.834 Launching WatchGuard Mobile VPN with SSL client. Version 11.5.3 (Build 339447) Built:Apr 5 2012 00:25:00
    2012-09-14T15:41:18.832 Requesting client configuration from X.X.X.X:443
    2012-09-14T15:41:20.386 VERSION file is 5.15, client version is 5.15
    2012-09-14T15:41:21.924 Error: connect() failed. ret = -1 errno=10061 (...)
    2012-09-14T15:41:23.960 Error: connect() failed. ret = -1 errno=10061
    2012-09-14T15:42:00.788 Failed Launch
    Has anyone had the same issue, or any ideas on what Group Policy changes need to be made so that he can connect without being a Domain Admin? Thanks in advance!

    Read the article

< Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >