Search Results

Search found 11313 results on 453 pages for 'rampant creative group'.


  • Group multiple media queries formed as output of LESS css

    - by Goje87
    I was planning to use LESS CSS in my project (PHP), specifically its nested @media query feature, but I find that it fails to group multiple identical media queries in the CSS it generates. For example:

        // LESS
        .header {
          @media all and (min-width: 240px) and (max-width: 319px) { font-size: 12px; }
          @media all and (min-width: 320px) and (max-width: 479px) { font-size: 16px; font-weight: bold; }
        }
        .body {
          @media all and (min-width: 240px) and (max-width: 319px) { font-size: 10px; }
          @media all and (min-width: 320px) and (max-width: 479px) { font-size: 12px; }
        }

        // output CSS
        @media all and (min-width: 240px) and (max-width: 319px) {
          .header { font-size: 12px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .header { font-size: 16px; font-weight: bold; }
        }
        @media all and (min-width: 240px) and (max-width: 319px) {
          .body { font-size: 10px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .body { font-size: 12px; }
        }

    My expected output is (@media queries grouped):

        @media all and (min-width: 240px) and (max-width: 319px) {
          .header { font-size: 12px; }
          .body { font-size: 10px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .header { font-size: 16px; font-weight: bold; }
          .body { font-size: 12px; }
        }

    I would like to know whether this can be done in LESS itself, or whether there is a simple CSS parser I can use to post-process the output CSS and group the @media queries.
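
    One LESS-side workaround is to invert the nesting: declare each breakpoint once as a variable holding an escaped media-query string, and put every rule for that breakpoint in a single @media block. A minimal sketch (the variable names are illustrative):

        // each breakpoint declared once, so the output is grouped by construction
        @narrow: ~"all and (min-width: 240px) and (max-width: 319px)";
        @medium: ~"all and (min-width: 320px) and (max-width: 479px)";

        @media @narrow {
          .header { font-size: 12px; }
          .body   { font-size: 10px; }
        }
        @media @medium {
          .header { font-size: 16px; font-weight: bold; }
          .body   { font-size: 12px; }
        }

    This trades the per-component nesting for already-grouped output, without a post-processing step.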

    Read the article

  • How To: Group Column Headings In WPF.

    - by VoidDweller
        --------------------------------------------------------------------------
        |[ Common Category ]|[ Dynamic Category 0 ]|[ Dynamic Category N ]|
        --------------------------------------------------------------------------
        |[Header 1]|[Header 2]|[ Type 0 ]|[ Type N ]|[ Type 0 ]|[ Type N ]|
        --------------------------------------------------------------------------
        |[Data 2 Group]                                                          |
        --------------------------------------------------------------------------
        | Data A | Data 2 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data B | Data 2 || Data 0 | Null   || Data 0 | Data 1 ||
        --------------------------------------------------------------------------
        |[Data 1 Group]                                                          |
        --------------------------------------------------------------------------
        | Data C | Data 1 || Null   | Data 1 || Data 0 | Data 1 ||
        | Data D | Data 1 || Null   | Null   || Data 0 | Null   ||
        --------------------------------------------------------------------------

    Dynamic Category columns: can be 0 or more; must contain 1 or more Type columns; will only be displayed if any row contains Type column data associated with it.

    Data rows: will be added asynchronously; will be grouped by a Common Category column; will add a Dynamic Category if it does not yet exist; will add a Type column if it does not yet exist within its appropriate Dynamic Category.

    Platform info: WPF, .NET 3.5 SP1, C#. I have a few partially functional prototypes, but each has its own major set of problems. Can any of you give me some guidance on this? A working prototype would be even better! Envision this nicely styled. :-)

    Read the article

  • Ordering view by highest group count question - Ruby on Rails

    - by bgadoci
    I've read the couple of questions about this on Stack Overflow but can't seem to find the answer. I am trying to display the tags in my blog ordered by the ones with the highest count in the tags table. Thanks to KandadaBoggu for helping me get the tags feature of the blog I am designing working. Here are the basics, and my question. Tag belongs_to :post and Post has_many :tags. The tags table is simple, consisting of the normal scaffolded fields plus post_id and tag_name (I actually called the column 'tag_name' instead of just 'name'). In my /views/posts/index.html.erb file I correctly display the tags by group and the number of times each is used (i.e. appears in the tags table). I just want to know how to order them by the highest count. Here is the code; I currently have it set to order by updated_at:

        # PostsController
        def index
          @tag_counts = Tag.count(:group => :tag_name,
                                  :order => 'updated_at DESC', :limit => 10)
          conditions, joins = {}, nil
          unless (params[:tag_name] || "").empty?
            conditions = ["tags.tag_name = ? ", params[:tag_name]]
            joins = :tags
          end
          @posts = Post.all(:joins => joins, :conditions => conditions,
                            :order => 'created_at DESC').paginate(:page => params[:page], :per_page => 5)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end
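
    With :group, Rails' calculation methods alias the aggregate as count_all in the generated SQL, so the :order option can sort on the aggregate directly. A sketch of just the changed call (Rails 2.x-style finders, as in the question):

        @tag_counts = Tag.count(:group => :tag_name,
                                :order => 'count_all DESC',  # sort by the aggregate
                                :limit => 10)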

    Read the article

  • User must have accepted TOS - Facebook Graph API error when posting photos to group page

    - by user370309
    Hi all, I've been struggling to upload an image from the user's computer and post it to our group page using the Facebook Graph API. I was able to send a POST request to Facebook with the image; however, I'm getting this error back: ERROR: (#200) User must have accepted TOS. To some extent, I don't believe I need the user to authorize himself, as the photo is being uploaded to our group page. This is the code I'm using:

        if ($albumId != null) {
            $args = array(
                'message' => $description
            );
            $args[basename($photoPath)] = '@' . realpath($photoPath);

            $ch = curl_init();
            $url = 'https://graph.facebook.com/' . $albumId . '/photos?' . $token;
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_HEADER, false);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $args);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
            $data = curl_exec($ch);

            $photoId = json_decode($data, true);
            if (isset($photoId['error']))
                die('ERROR: ' . $photoId['error']['message']);
            $temp = explode('.', sprintf('%f', $photoId['id']));
            $photoId = $temp[0];
            return $photoId;
        }

    Can somebody tell me if I need to request extra permissions from the user, or what I'm doing wrong? Thanks very much!
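
    Error #200 is generally about the access token rather than the upload itself: the user behind the token has to have granted the publishing permission, and for page/group targets the token has to be authorised to post there. A sketch of the era's OAuth dialog URL requesting the photo-publishing permissions - the app id and redirect URI are placeholders, and the URL is split across lines for readability:

        https://www.facebook.com/dialog/oauth
            ?client_id=YOUR_APP_ID
            &redirect_uri=http://www.example.com/callback
            &scope=publish_stream,user_photos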

    Read the article

  • Hudson fails to use unix user/group to do authentication

    - by Kane
    I'm trying to use the Unix user/group database as the security realm of Hudson. The Linux server uses NIS for user management. My account can log in to the Hudson server via SSH, and the Hudson server runs as user 'hudson', which is also a member of group 'shadow', so Hudson can read /etc/shadow. When I test the configuration using the 'test' button, Hudson tells me it works. But I can't use my Unix account and password to log in to the Hudson server, and I find the following Java exception in Hudson's log:

        Jan 12, 2011 8:23:42 AM hudson.security.AuthenticationProcessingFilter2 onUnsuccessfulAuthentication
        INFO: Login attempt failed
        org.acegisecurity.BadCredentialsException: pam_authenticate failed : Authentication failure;
        nested exception is org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure
            at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:100)
            at org.acegisecurity.providers.ProviderManager.doAuthentication(ProviderManager.java:195)
            at org.acegisecurity.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:45)
            at org.acegisecurity.ui.webapp.AuthenticationProcessingFilter.attemptAuthentication(AuthenticationProcessingFilter.java:71)
            at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:252)
            at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
            at org.acegisecurity.ui.basicauth.BasicProcessingFilter.doFilter(BasicProcessingFilter.java:173)
            at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
            at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249)
            at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:66)
            at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
            at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
            at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164)
            at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
            at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
            at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
            at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:244)
            at winstone.RequestHandlerThread.run(RequestHandlerThread.java:150)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure
            at org.jvnet.libpam.PAM.check(PAM.java:105)
            at org.jvnet.libpam.PAM.authenticate(PAM.java:123)
            at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:90)
            ... 18 more
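
    A few hedged checks, run on the server: note that a freshly added 'shadow' group membership only takes effect for the Hudson process after it is restarted, and that with NIS the password hashes may live in NIS rather than in /etc/shadow at all, in which case group 'shadow' read access does not help. A sketch:

        id hudson              # is the shadow group membership actually in effect for the service?
        ls -l /etc/shadow      # the group needs read permission
        getent shadow mylogin  # NIS accounts often have no usable local shadow entry
        cat /etc/pam.d/sshd    # the PAM stack consulted (Hudson's PAM realm uses the sshd service, as I understand it)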

    Read the article

  • LINQ: getting customers grouped by date and then by their type

    - by Nitin varpe
    I am working on generating a report with LINQ in C#, showing the number of customers of each type. There are three types of registered customer: guest, registered, and manager. I want to group customers by registration date and then by customer type, i.e. the report should show the number of customers registered on each day, with a separate row for each type. If today 3 guests, 4 registered and 2 managers are inserted, and tomorrow 4, 5 and 6 respectively, then the report should look like this:

        DATE        TYPE OF CUSTOMER   COUNT
        31-10-2013  GUEST              3
        31-10-2013  REGISTERED         4
        31-10-2013  MANAGER            2
        30-10-2013  GUEST              5
        30-10-2013  REGISTERED         10
        30-10-2013  MANAGER            3

    Here is what I have:

        var subquery = from eat in _customerRepo.Table
                       group eat by new
                       {
                           yy = eat.CreatedOnUTC.Value.Year,
                           mm = eat.CreatedOnUTC.Value.Month,
                           dd = eat.CreatedOnUTC.Value.Day
                       } into g
                       select new { Id = g.Min(x => x.Id) };

        var query = from c in _customerRepo.Table
                    join cin in subquery.Distinct() on c.Id equals cin.Id
                    select c;

    With the query above I only get the customer with the minimum Id registered on each day. Thanks in advance.
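
    Adding the type to the same composite group key gives one group per (day, type) pair, and Count() on each group yields the report rows directly. A sketch - 'CustomerType' is a stand-in for however the type is actually stored on the entity:

        var report = from c in _customerRepo.Table
                     group c by new
                     {
                         yy = c.CreatedOnUTC.Value.Year,
                         mm = c.CreatedOnUTC.Value.Month,
                         dd = c.CreatedOnUTC.Value.Day,
                         type = c.CustomerType            // assumed field name
                     } into g
                     orderby g.Key.yy descending, g.Key.mm descending, g.Key.dd descending
                     select new
                     {
                         g.Key.yy, g.Key.mm, g.Key.dd,
                         Type  = g.Key.type,
                         Count = g.Count()                // customers of this type on this day
                     };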

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to wrap my head around it. Say I have a collection of objects of the form:

        {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...}

    Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type on each day. Also, I'd like to make it unique by user_id, i.e. if a user has more events in the same day, count them only once. I'm trying to do this with map/reduce. I do:

        db.logs.mapReduce(
            function() { emit(this.type, 1); },
            function(k, vals) {
                var total = 0;
                for (var i = 0; i < vals.length; i++) total += vals[i];
                return total;
            })

    which nicely groups by type, but now, how can I group by date at the same time? It seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness? I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts; also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
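
    The key passed to emit() can be a document, so a composite key covers both fields at once. A sketch that also handles the per-user uniqueness by carrying user ids through the reduce and counting them in a finalize step (one possible approach, not the only one):

        db.logs.mapReduce(
            function() {
                // composite key: one bucket per (date, type) pair
                emit({date: this.date, type: this.type}, {users: [this.user_id]});
            },
            function(key, vals) {
                // merge the user id lists, keeping each user once;
                // safe under re-reduce because input and output have the same shape
                var seen = {};
                vals.forEach(function(v) {
                    v.users.forEach(function(u) { seen[u] = true; });
                });
                var users = [];
                for (var u in seen) users.push(u);
                return {users: users};
            },
            {
                finalize: function(key, val) { return val.users.length; },
                out: {inline: 1}
            })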

    Read the article

  • Collapsing data frame by selecing one row per group

    - by jkebinger
    I'm trying to collapse a data frame by removing all but one row from each group of rows with identical values in a particular column - in other words, keeping the first row from each group. For example, I'd like to convert this:

        > d = data.frame(x=c(1,1,2,4), y=c(10,11,12,13), z=c(20,19,18,17))
        > d
          x  y  z
        1 1 10 20
        2 1 11 19
        3 2 12 18
        4 4 13 17

    into this:

          x  y  z
        1 1 11 19
        2 2 12 18
        3 4 13 17

    I'm using aggregate to do this currently, but the performance is unacceptable with more data:

        > d.ordered = d[order(-d$y),]
        > aggregate(d.ordered, by=list(key=d.ordered$x), FUN=function(x){x[1]})

    I've tried split/unsplit with the same function argument as here, but unsplit complains about duplicate row numbers. Is rle a possibility? Is there an R idiom to convert rle's length vector into the indices of the rows that start each run, which I can then use to pluck those rows out of the data frame?
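
    duplicated() gives a logical mask marking repeats of each x, so one vectorised subset does the collapse with no per-group function calls. A sketch, including the rle idiom asked about (which assumes rows are ordered so equal x values are adjacent):

        # order first, then keep the first row seen for each x
        d.ordered <- d[order(-d$y), ]
        collapsed <- d.ordered[!duplicated(d.ordered$x), ]

        # the rle route: convert the lengths vector into each run's starting index
        d.byx  <- d.ordered[order(d.ordered$x), ]
        r      <- rle(d.byx$x)
        starts <- cumsum(c(1, head(r$lengths, -1)))
        d.byx[starts, ]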

    Read the article

  • JDO, GAE: Load object group by child's key

    - by tiex
    I have an owned one-to-many relationship between two objects:

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class AccessInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private com.google.appengine.api.datastore.Key keyInternal;
            ...
            @Persistent
            private PollInfo currentState;

            public AccessInfo() {}

            public AccessInfo(String key, String voter, PollInfo currentState) {
                this.voter = voter;
                this.currentState = currentState;
                setKey(key); // this key is unique in the whole system
            }

            public void setKey(String key) {
                this.keyInternal = KeyFactory.createKey(
                    AccessInfo.class.getSimpleName(), key);
            }

            public String getKey() {
                return this.keyInternal.getName();
            }

    and

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class PollInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent(mappedBy = "currentState")
            private List<AccessInfo> accesses;
            ...

    I created an instance of the PollInfo class and made it persistent; that works. But when I then want to load this group by an AccessInfo key, I get a NucleusObjectNotFoundException. Is it possible to load a group by a child's key?
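
    In an owned relationship the datastore gives each child a key that embeds its parent as an ancestor path, so a key built from the child kind and name alone (as in setKey above) does not match what was stored. A hedged Java sketch of building the full path and loading through it - pollId and accessName are placeholders:

        // the parent comes first in the path, then the child
        Key childKey = new KeyFactory.Builder(
                PollInfo.class.getSimpleName(), pollId)       // parent kind + numeric id
            .addChild(AccessInfo.class.getSimpleName(), accessName)
            .getKey();

        AccessInfo info = pm.getObjectById(AccessInfo.class, childKey);
        PollInfo group = info.getCurrentState();              // navigate back to the owner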

    Read the article

  • Using hashing to group similar records

    - by Neil Dobson
    I work for a fulfillment company and we have to pack and ship many orders from our warehouse to customers. To improve efficiency we would like to group identical orders and pack them in the most efficient way. By identical I mean having the same number of order lines, containing the same SKUs and the same order quantities. To achieve this I was thinking about hashing each order; we can then group by hash to quickly see which orders are the same.

    We are moving from an Access database to a PostgreSQL database, and we have .NET based systems for data loading and general order processing, so we can either do the hashing during the data loading or hand this task over to the DB. My first question is whether the hashing should be managed by the DB, possibly using triggers, or whether the hash should be created on the fly using a view or something. And secondly, would it be best to calculate a hash for each order line and then combine these to find an order-level hash for grouping, or should I just use a trigger for all CRUD operations on the order lines table which re-calculates a single hash for the entire order and stores the value in the orders table? TIA
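
    Whichever side computes it, the hash only groups identical orders reliably if the input is canonicalised first - sort the lines by SKU so that line order can never change the hash. A .NET-side sketch (the OrderLine type and field names are stand-ins for the real entities):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        public class OrderLine           // stand-in for the real order-line entity
        {
            public string Sku;
            public int Quantity;
        }

        public static class OrderHasher
        {
            public static string Hash(IEnumerable<OrderLine> lines)
            {
                // sort by SKU so the hash is independent of line order
                var canonical = string.Join("|",
                    lines.OrderBy(l => l.Sku)
                         .Select(l => l.Sku + ":" + l.Quantity)
                         .ToArray());
                using (var sha = SHA256.Create())
                {
                    var bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return BitConverter.ToString(bytes).Replace("-", "");
                }
            }
        }

    Grouping is then a plain GROUP BY (or LINQ GroupBy) on the stored hash column.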

    Read the article

  • apache solr : sum of data resulted from group by

    - by Terance Dias
    Hi, we have a requirement where we need to group our records by a particular field and take the sum of a corresponding numeric field, e.g.:

        select userid, sum(click_count) from user_action group by userid;

    We are trying to do this using Apache Solr and found there are two ways of doing it:

    1. Using the field collapsing feature (http://blog.jteam.nl/2009/10/20/result-grouping-field-collapsing-with-solr/), but we found two problems with this:
       1.1. It is not part of a release and is available as a patch, so we are not sure we can use it in production.
       1.2. We do not get the sum back, only the individual counts, so we would need to sum them on the client side.
    2. Using the Stats Component along with faceted search (http://wiki.apache.org/solr/StatsComponent). This meets our requirement but is not fast enough for very large data sets.

    I just wanted to know if anybody knows of any other way to achieve this. Appreciate any help. Thanks, Terance.
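
    For reference, the StatsComponent request in option 2 looks roughly like this - a sketch against a local Solr, using the field names from the example, with the URL split across lines for readability. The response carries a sum (among other stats) of click_count for each userid facet value:

        http://localhost:8983/solr/select?q=*:*&rows=0
            &stats=true
            &stats.field=click_count
            &stats.facet=userid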

    Read the article

  • Access values of a group of select boxes in php from post

    - by 2Real
    Hi, I'm new to PHP, and I can't figure this out. I'm trying to work out how to access the data of a group of select boxes I have defined in my HTML code. I tried grouping them as a class, but that doesn't seem to work... maybe I was doing it wrong. This is the HTML code:

        <form action="" method="post">
          <select class="foo">
            <option> 1.....100</option>
          </select>
          <select class="foo">
            <option> 1.... 500></option>
          </select>
          <input type="submit" value="Submit" name="submit"/>
        </form>

    I essentially want to group all my select boxes and access all their values in my PHP code. Thanks
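
    Forms submit by name, not class, and PHP turns a field name ending in [] into an array, which is the usual way to group inputs. A minimal sketch:

        <form action="" method="post">
          <select name="foo[]">  <!-- same name, with [], on every box -->
            <option value="100">100</option>
          </select>
          <select name="foo[]">
            <option value="500">500</option>
          </select>
          <input type="submit" value="Submit" name="submit"/>
        </form>

        <?php
        if (isset($_POST['foo'])) {
            foreach ($_POST['foo'] as $i => $value) {  // one entry per select box
                echo "box $i: $value\n";
            }
        }
        ?>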

    Read the article

  • SQL Group by Minute- Expanded

    - by Barnie
    I am working on something similar to this post: TS SQL - group by minute. However, mine is pulling from a message queue, and I need an accurate count of the amount of traffic the message queue is creating/sending, and at what time:

        Select * From MessageQueue mq

    My expanded version of this is the following:

    A) The user defines a start time and an end time (easy enough using DECLAREs @StartTime and @EndTime).
    B) Give the user the option of choosing the grouping: broken out by 1-minute, 5-minute, 15-minute, or 30-minute (max) intervals. (I had thought to do this with a CASE statement, but my test problems fall apart on me.)
    C) Display the data to accurately show a count of what happened during the selected interval (grouping).

    This is where I am at so far:

        DECLARE @StartTime datetime
        DECLARE @EndTime datetime

        SELECT DATEPART(n, mq.cre_date)/5 AS Time  -- trying to group by 5-minute intervals
             , CONVERT(VARCHAR(10), mq.Cre_Date, 101)
             , COUNT(*) AS results
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartTime AND @EndTime
        GROUP BY DATEPART(n, mq.cre_date)/5        -- trying to group by 5-minute intervals
               , mq.Cre_Date

    This is the output I would like to achieve:

        [Time]  [Date]       [Message Count]
        1300    06/26/2012     5
        1305    06/26/2012     1
        1310    06/26/2012   100
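
    A common idiom for (B) and (C) is to round each timestamp down to its N-minute bucket with DATEDIFF/DATEADD, which handles the date and the interval choice in one expression and avoids the CASE statement entirely. A sketch (the inline DECLARE initialisers need SQL Server 2008 or later):

        DECLARE @StartTime datetime = '2012-06-26 13:00'
        DECLARE @EndTime   datetime = '2012-06-26 14:00'
        DECLARE @Interval  int      = 5   -- 1, 5, 15 or 30 minutes

        SELECT DATEADD(minute,
                       DATEDIFF(minute, 0, mq.cre_date) / @Interval * @Interval,
                       0)       AS IntervalStart,   -- start of each bucket, date included
               COUNT(*)         AS MessageCount
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartTime AND @EndTime
        GROUP BY DATEADD(minute,
                         DATEDIFF(minute, 0, mq.cre_date) / @Interval * @Interval,
                         0)
        ORDER BY IntervalStart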

    Read the article

  • mySQL select and group by values

    - by Foo
    I'd like to count and group rows by specific values. This seems fairly simple, but I can't seem to do it. I have a table set up similar to this:

        Table: Ratings
        id   pID   uID   rating
        1    1     2     7
        2    1     7     7
        3    1     5     4
        4    1     1     1

    id is the primary key; pID and uID are foreign keys. rating contains values between 1 and 10, and only between 1 and 10. I want to run some statistics and count the number of ratings with a certain value. In the example above, two users have left a rating of 7. So I wrote the following query:

        SELECT COUNT(*) AS `count`, `rating`
        FROM `ratings`
        WHERE pID = '1'
        GROUP BY `rating`
        ORDER BY `rating`

    which yields the nice result:

        count  rating
        1      1
        1      4
        2      7

    I'd like to get the MySQL query to include all the values between 1 and 10 as well. For example, the desired result:

        count  rating
        1      1
        0      2
        0      3
        1      4
        0      5
        0      6
        2      7
        0      8
        0      9
        0      10

    Unfortunately, I'm relatively new to SQL and I've been reading through everything I could get my hands on for the past hour, but I can't get it to work. I've been leaning along the lines of some type of JOIN. If anyone can point me in the right direction, it'd be appreciated. Thanks.
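
    A LEFT JOIN from a derived table of the ten possible values keeps the zero rows; the pID filter has to live in the ON clause so unmatched values survive the join. A sketch:

        SELECT COUNT(r.id) AS `count`, v.rating   -- COUNT(r.id) skips the NULLs
        FROM (SELECT 1 AS rating UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
              UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8
              UNION SELECT 9 UNION SELECT 10) v
        LEFT JOIN ratings r
               ON r.rating = v.rating AND r.pID = '1'
        GROUP BY v.rating
        ORDER BY v.rating;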

    Read the article

  • cakephp group based permissions

    - by Elwhis
    Hey guys, I would like to have group-based restrictions that allow users to access only specified parts of the web app. I am new to the whole ACL stuff and didn't quite get it from the manual, therefore I would like to ask some questions. Before any questions, my routes look like this:

        Router::connect('/', array('controller' => 'users', 'action' => 'login'));
        Router::connect('/admin/:controller/:action/*',
            array('prefix' => 'admin', 'admin' => true));
        Router::connect('/registered/:controller/:action/*',
            array('prefix' => 'registered', 'registered' => true));

    1.) How do I restrict users from any group other than Administrator so that they can access ONLY the /registered/ part of the web app?
    2.) How do I prevent anyone from using the default addresses like www.example.com/users/add on a global scale? (I want only www.example.com/admin/users/add or www.example.com/registered/users/add type addresses.) These addresses are not even set in routes.php but they still work.

    Any answers appreciated.
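
    Without going full ACL, a beforeFilter in AppController can gate on the routing prefix versus the logged-in user's group. A sketch in CakePHP 1.x style, assuming Auth is in use and a 'group' field exists on the users table (both names illustrative):

        // app_controller.php
        class AppController extends Controller {
            var $components = array('Auth');

            function beforeFilter() {
                $prefix = isset($this->params['prefix']) ? $this->params['prefix'] : null;
                $group  = $this->Auth->user('group');

                // 2) refuse un-prefixed URLs such as /users/add outright
                if ($prefix === null && $this->params['controller'] != 'users') {
                    $this->cakeError('error404');
                }
                // 1) only Administrators may enter /admin/...
                if ($prefix === 'admin' && $group !== 'Administrator') {
                    $this->redirect(array('controller' => 'users', 'action' => 'login'));
                }
            }
        }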

    Read the article

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

        Table: authors
        id        INT
        nickname  VARCHAR(50)
        status    ENUM('active', 'blocked')
        about     TEXT

        Table: books
        author_id INT
        title     VARCHAR(150)

    I'm running a query against these tables to get each author and a count of the books he has:

        SELECT a.*, COUNT(b.id) AS book_count
        FROM authors AS a, books AS b
        WHERE a.status != 'blocked'
          AND b.author_id = a.id
        GROUP BY a.id
        ORDER BY a.nickname

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query can use it. Here is the current EXPLAIN:

        id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
        1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
        1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'll appreciate any help and hints on this - what should the index structure be, so that MySQL can use it?
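
    The "Using temporary; Using filesort" comes from grouping on id while ordering on nickname, which no single index can satisfy in this query shape. One rewrite worth trying - a sketch, not a guaranteed fix, since whether the optimizer actually uses a nickname index depends on version and data - aggregates the books in a derived table so the outer query only has to filter and sort:

        ALTER TABLE authors ADD INDEX idx_nickname (nickname);

        SELECT a.*, bc.book_count
        FROM authors AS a
        JOIN (SELECT author_id, COUNT(*) AS book_count
              FROM books
              GROUP BY author_id) AS bc ON bc.author_id = a.id
        WHERE a.status != 'blocked'
        ORDER BY a.nickname;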

    Read the article

  • LinqtoSql Pre-compile Query problem with Count() on a group by

    - by Joe Pitz
    I have a LINQ to SQL query that I now want to precompile:

        var unorderedc = from insp in sq.Inspections
                         where insp.TestTimeStamp > dStartTime
                            && insp.TestTimeStamp < dEndTime
                            && insp.Model == "EP"
                            && insp.TestResults != "P"
                         group insp by new { insp.TestResults, insp.FailStep } into grp
                         select new
                         {
                             FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
                             CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
                             grp.Key.TestResults,
                             grp.Key.FailStep,
                             PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
                         };

    I have created this delegate:

        public static readonly Func<SQLDataDataContext, int, string, string,
            DateTime, DateTime, IQueryable<CalcFailedTestResult>> GetInspData =
            CompiledQuery.Compile((SQLDataDataContext sq, int tcount,
                string strModel, string strTest,
                DateTime dStartTime, DateTime dEndTime,
                IQueryable<CalcFailedTestResult> CalcFailed) =>
                from insp in sq.Inspections
                where insp.TestTimeStamp > dStartTime
                   && insp.TestTimeStamp < dEndTime
                   && insp.Model == strModel
                   && insp.TestResults != strTest
                group insp by new { insp.TestResults, insp.FailStep } into grp
                select new
                {
                    FailedCount = (grp.Key.TestResults == "F" ? grp.Count() : 0),
                    CancelCount = (grp.Key.TestResults == "C" ? grp.Count() : 0),
                    grp.Key.TestResults,
                    grp.Key.FailStep,
                    PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
                });

    The syntax error is on the CompiledQuery.Compile() statement and appears to be related to the use of the select new {} syntax. In other precompiled queries I have written, I only had to use the select projection by itself, but in this case I need to perform the grp.Count() and the immediate-if logic. I have searched SO and other references but cannot find the answer.
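
    A compiled query's result type appears in the delegate's type arguments, so it cannot be an anonymous type; projecting into the named CalcFailedTestResult class already referenced in the signature is the usual fix (the unused IQueryable parameter is dropped). A sketch, with the property names assumed to match the projection - and note that CompiledQuery.Compile only accepts a limited number of query parameters (the context plus three on .NET 3.5; more on .NET 4), so on 3.5 the argument list may need trimming:

        public class CalcFailedTestResult          // if not already defined like this
        {
            public int FailedCount { get; set; }
            public int CancelCount { get; set; }
            public string TestResults { get; set; }
            public string FailStep { get; set; }
            public decimal PercentFailed { get; set; }
        }

        public static readonly Func<SQLDataDataContext, int, string, string,
            DateTime, DateTime, IQueryable<CalcFailedTestResult>> GetInspData =
            CompiledQuery.Compile(
                (SQLDataDataContext sq, int tcount, string strModel, string strTest,
                 DateTime dStartTime, DateTime dEndTime) =>
                from insp in sq.Inspections
                where insp.TestTimeStamp > dStartTime && insp.TestTimeStamp < dEndTime
                   && insp.Model == strModel && insp.TestResults != strTest
                group insp by new { insp.TestResults, insp.FailStep } into grp
                select new CalcFailedTestResult    // named type instead of new { }
                {
                    FailedCount = grp.Key.TestResults == "F" ? grp.Count() : 0,
                    CancelCount = grp.Key.TestResults == "C" ? grp.Count() : 0,
                    TestResults = grp.Key.TestResults,
                    FailStep = grp.Key.FailStep,
                    PercentFailed = Convert.ToDecimal(1.0 * grp.Count() / tcount * 100)
                });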

    Read the article

  • Getting duplicate values from group_concat

    - by Sackling
    I am having a problem where I am getting duplicated values where I expect distinct ones. Here is my SQL:

        SELECT DISTINCT p.products_image, pd.products_name, p.products_id,
               p.products_model, p.manufacturers_id, m.manufacturers_name,
               p.products_price, p.products_sort_order, p.products_tax_class_id,
               pd.products_viewed,
               GROUP_CONCAT(p2i.icons_id SEPARATOR ',') AS icon_ids,
               GROUP_CONCAT(pi.icon_class SEPARATOR ',') AS icon_class,
               IF(s.status, s.specials_new_products_price, NULL) AS specials_new_products_price,
               IF(s.status, s.specials_new_products_price, p.products_price) AS final_price
        FROM products p
        LEFT JOIN specials s ON p.products_id = s.products_id
        LEFT JOIN manufacturers m ON p.manufacturers_id = m.manufacturers_id
        JOIN products_description pd ON p.products_id = pd.products_id
        JOIN products_to_categories p2c ON p.products_id = p2c.products_id
        INNER JOIN products_specifications ps7 ON p.products_id = ps7.products_id
        LEFT JOIN products_to_icon p2i ON p.products_id = p2i.products_id
        LEFT JOIN products_icons pi ON p2i.icons_id = pi.icons_id
        WHERE p.products_status = '1'
          AND pd.language_id = '1'
          AND ps7.specification IN ('Polycotton', 'Reflective')
          AND ps7.specifications_id = '7'
          AND ps7.language_id = '1'
          AND p2c.categories_id = '21'
        GROUP BY p.products_id
        ORDER BY p.products_sort_order

    The column that gets doubled values is icon_ids, from the GROUP_CONCAT. This seems to happen only when both 'Polycotton' and 'Reflective' match ps7.specification; if it is only one or the other, it works fine. The products_to_icon table contains two columns, products_id and icons_id; if a product has two icons there are two rows, so I'm pretty sure that is what causes the duplicate icon ids. When I run this, the icon_ids column for a product with two icons is "4,4,6,6", for example, when what I need is "4,6".
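
    When a product matches both specifications, the joins produce one row per (specification, icon) pair, so GROUP_CONCAT sees each icon twice; DISTINCT inside the GROUP_CONCAT collapses them. A sketch of just the changed expressions:

        GROUP_CONCAT(DISTINCT p2i.icons_id ORDER BY p2i.icons_id SEPARATOR ',') AS icon_ids,
        GROUP_CONCAT(DISTINCT pi.icon_class SEPARATOR ',') AS icon_class,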

    Read the article

  • AWS: setting up auto-scale for EC2 instances

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/16/aws-setting-up-auto-scale-for-ec2-instances.aspx

    With Amazon Web Services, there’s no direct equivalent to Azure Worker Roles - no Elastic Beanstalk-style application for .NET background workers. But you can get the auto-scale part by configuring an auto-scaling group for your EC2 instance. This is a step-by-step guide that shows you how to create the auto-scaling configuration (which for EC2 you need to do with the command line) and then link your scaling policies to CloudWatch alarms in the Web console. I’m using queue size as my metric for CloudWatch, which is a good fit if your background workers are pulling messages from a queue and processing them. If the queue is getting too big, the “high” alarm will fire and spin up a new instance to share the workload. If the queue is draining down, the “low” alarm will fire and shut down one of the instances.

    To start with, you need to manually set up your app in an EC2 VM - for a background worker that would mean hosting your code in a Windows Service (I always use Topshelf). If you’re dual-running Azure and AWS, you can isolate your logic in one library with a generic entry point that has Start() and Stop() functions, so your Worker Role and Windows Service are essentially using the same code.

    When you have your instance set up with the Windows Service running automatically, and you’ve tested it starts up and works properly from a reboot, shut the machine down and take an image of the VM, using Create Image (EBS AMI) from the Web Console. When that completes, you’ll have your own AMI which you can use to spin up new instances, and you’re ready to create your auto-scaling group. You need to dip into the command-line tools for this, so follow the AWS guide to set up the autoscale command-line tool. Now we’re ready to go.

    1. Create a launch configuration. This tells AWS what to do when a new instance needs to be spun up. You create it with the as-create-launch-config command, which looks like this:

        as-create-launch-config sc-xyz-launcher  # name of the launch config
          --image-id ami-7b9e9f12                # id of the AMI you extracted from your VM
          --region eu-west-1                     # which region the new instance gets created in
          --instance-type t1.micro               # size of the instance to create
          --group quicklaunch-1                  # security group for the new instance

    2. Create an auto-scaling group. The auto-scaling group links to the launch config and defines the overall configuration of the collection of instances:

        as-create-auto-scaling-group sc-xyz-asg  # auto-scaling group name
          --region eu-west-1                     # region to create in
          --launch-configuration sc-xyz-launcher # launch config to invoke for new instances
          --min-size 1                           # minimum number of nodes in the group
          --max-size 5                           # maximum number of nodes in the group
          --default-cooldown 300                 # seconds to wait after each scaling event before checking if another is required
          --availability-zones eu-west-1a eu-west-1b eu-west-1c  # zones instances may be allocated in; multiple entries mean EC2 can use any of them

    3. Create a scale-up policy. The policy dictates what happens in response to a scaling event triggered by a “high” alarm being breached. It links to the auto-scaling group; this sample results in one additional node being spun up:

        as-put-scaling-policy scale-up-policy    # policy name
          -g sc-xyz-asg                          # auto-scaling group the policy works with
          --adjustment 1                         # size of the adjustment
          --region eu-west-1                     # region
          --type ChangeInCapacity                # a fixed number of nodes; PercentChangeInCapacity adjusts relative to the current count, e.g. increasing by 50%

    4. Create a scale-down policy. The policy dictates what happens in response to a scaling event triggered by a “low” alarm being breached. It links to the auto-scaling group; this sample results in one node from the group being taken offline:

        as-put-scaling-policy scale-down-policy
          -g sc-xyz-asg
          "--adjustment=-1"                      # in Windows, use double-quotes to surround a negative adjustment value
          --type ChangeInCapacity
          --region eu-west-1

    5. Create a “high” CloudWatch alarm. We’re done with the command line now. In the Web Console, open the CloudWatch view and create a new alarm. This alarm will monitor your metric and invoke the scale-up policy from your auto-scaling group when the group is working too hard. Configure your metric - this example will fire the alarm if there are more than 10 messages in my queue for over a minute - then link the alarm to the scale-up policy in your group.

    6. Create a “low” CloudWatch alarm. The opposite of step 5: this alarm triggers when the instances in your group don’t have enough work to do (e.g. fewer than 2 messages in the queue for 1 minute), and invokes the scale-down policy.

    And that’s it. You don’t need your original VM, as the auto-scale group keeps a minimum number of nodes connected. You can test the scaling by flexing your CloudWatch metric - in this example, filling up a queue from a stub publisher - and watching AWS create new nodes as required, then stopping the publisher and watching AWS kill off the spare nodes.
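
    Steps 5 and 6 can also be scripted with the CloudWatch command-line tools of the same vintage. A sketch of the “high” alarm - the queue name and the policy ARN (printed by as-put-scaling-policy) are placeholders, and the flags are from the legacy mon-* CLI:

        mon-put-metric-alarm HighQueueDepth
          --namespace "AWS/SQS"
          --metric-name ApproximateNumberOfMessagesVisible
          --dimensions "QueueName=my-queue"
          --statistic Average
          --period 60
          --evaluation-periods 1
          --comparison-operator GreaterThanThreshold
          --threshold 10
          --alarm-actions <scale-up-policy-arn>
          --region eu-west-1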

    Read the article

  • Upgraded AGPM Server cannot connect to relocated archive

    - by thommck
    We were using Advanced Group Policy Management (AGPM) v3.0 on our Windows Server 2008 DC, which kept the archive on the C: drive. When we upgraded to AGPM v4 we relocated the archive to the D: drive. Now when we try to look at a GPO's history in GPMC we get the following error:

        Failed to connect to the AGPM Server. The following error occurred:
        The server was unable to process the request due to an internal error. For more
        information about the error, either turn on IncludeExceptionDetailInFaults (either
        from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on
        the server in order to send the exception information back to the client, or turn
        on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect
        the server trace logs.
        System.ServiceModel.FaultException (80131501)

    You can click Retry or Cancel: Retry brings up the same error, and Cancel takes you back to GPMC, where the History tab displays "Archive not found". I installed the client on a Windows 7 computer (an unsupported setup) and it could read the server archive without any issues. I followed the TechNet article "Move the AGPM Server and the Archive", but that didn't make a difference. How can I tell the server where the archive is?

    Read the article

  • ActiveSync devices causing accounts to lockout

    - by Abdullah
    When a user changes his account password for whatever reason (read: expired) and the old password is still stored on his mobile device connected through EAS, his account gets locked out almost immediately - as it should, according to the lockout policy defined in AD. That part was easy to figure out. The hard part is keeping it from happening. I looked everywhere. Nothing. Basically there are four parts to the puzzle: the EAS device, the TMG (ISA) server, the EAS protocol, and finally AD. None of them has a way to stop the EAS device from failing to authenticate. So I figured I'd have to come up with a clever workaround, and the only things I could come up with are to create a group for all EAS users and exclude them from the lockout policy, which obviously defeats the whole purpose of the policy, or to educate the users to update their devices with the new passwords, which is impossible.

    The question: can you think of any other way to prevent EAS from locking out the accounts?

    Environment: mostly iOS devices, all through EAS. TMG 2010. Exchange 2007. AD 2008 R2.
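
    On AD 2008 R2, a middle ground between "excluded" and "locked out" is a fine-grained password policy that gives the EAS group a more forgiving lockout threshold rather than none at all, so a stale device can retry for a while without tripping the lock. A sketch with the AD PowerShell module - the group and policy names are illustrative, and the non-lockout settings are placeholders to mirror your domain policy:

        Import-Module ActiveDirectory

        # tolerant lockout: 50 bad tries per 15-minute window for EAS users only
        New-ADFineGrainedPasswordPolicy -Name "EAS-Users-PSO" -Precedence 10 `
            -LockoutThreshold 50 -LockoutObservationWindow "00:15:00" -LockoutDuration "00:15:00" `
            -MinPasswordLength 8 -PasswordHistoryCount 24 -ComplexityEnabled $true `
            -MinPasswordAge "1.00:00:00" -MaxPasswordAge "42.00:00:00" `
            -ReversibleEncryptionEnabled $false

        Add-ADFineGrainedPasswordPolicySubject "EAS-Users-PSO" -Subjects "EAS Users"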

    Read the article

  • Cannot Change "Log on through Terminal Services" in Local Security Policy XP from Server 2008 GP

    - by Campo
    This is a mixed AD environment: Server 2003 R2 and 2008 R2. I have a 2003 R2 AD and a 2008 R2 AD; GPO is usually managed from the 2008 R2 machine. I have an RD Gateway on another server as well. I set up the CAP and RAP to allow a normal user to log on to the department's workstations, and I adjusted the GPO for that OU to allow "log on through Remote Desktop Gateway" for the user group. This worked on my Windows 7 workstation. Unfortunately the policy has a different name on XP: "Allow log on through Terminal Services". I can get right through to the machine, but when the logon actually happens on the local machine I get the "Cannot log on interactively" error. This is set (for the local machine) in secpol.msc under Local Security Policy > "User Rights Assignment", but it is controlled by the GPO in Computer Configuration > Policies > Security Settings > Local Policies > "User Rights Assignment".

    Do I simply need to adjust the same setting on the same GPO but with a Server 2003 GP editor? It feels like that could cause issues... Looking for some direction, or anyone who has run into this issue.

    UPDATE: Should this work? support.microsoft.com/kb/186529 - it still seems like I will have the issue, as the actual GP setting for "log on through Terminal Services" is still named differently between Server 2008 R2 and 2003 R2...

    Another thought: should I delete the GPO made for the department and remake it with the 2003 R2 server? I have no 2008-specific settings, as the whole department runs XP other than myself. If that's a solution, I will move my computer out of the department as a solution... Thoughts?

    Read the article

  • Users and Groups management on 7 Home Premium

    - by AviD
    Recently I upgraded the home PC from XP Pro to Windows 7 Home Premium, and I'm looking for a solution for a few things that seem to be missing from this edition...

    Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or do anything even slightly advanced to users (basically, create/group/picture is it). net localgroup, net users, net etc. don't seem to work - I'm getting "system error 5". While I'm on the topic, I can't activate (what was once) "Local Security Policy"... Looking for any help, advice, or even a new direction, cuz things is differ'nt on Winnows7...

    To clarify, I'm looking to do some of the following, which were simple back in XP-land:

    - remote user only (i.e. no local logon)
    - grant special privileges for a specific user
    - grant access to e.g. the C$ share for a specific remote user
    - create custom groups for users, to be able to separate the privileges of, say, my wife's account from my kids'
    - define quite specifically what each user can do (beyond just standard users)
    - harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)
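
    "System error 5" from the net commands is an access-denied symptom, and usually just means the prompt isn't elevated; the commands themselves work on Home Premium even though the MMC snap-in is blocked. A sketch (group and user names are examples):

        :: open cmd.exe elevated first (right-click, "Run as administrator")
        net localgroup
        :: create a custom group and put a user in it
        net localgroup Kids /add
        net localgroup Kids kid1 /add
        net user remoteonly /add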

    Read the article

  • How can I make WSUS less invasive for our users?

    - by Cypher
    We have WSUS pushing updates out to our users' workstations, and things are going relatively well, with one annoying caveat: there seems to be an issue with a pop-up displayed in front of some users informing them that their machine will be rebooted in 15 minutes, and they have nothing to say about it. This may be because they did not log out the prior night. Nevertheless, this is a bit too much and is very counter-productive for our users.

    Here is a bit about our environment: our users run Windows XP Pro and are part of an Active Directory domain, and WSUS is applied via Group Policy (screenshot of the enforcing GPO omitted here).

    Here is how I want WSUS to work (ideally - I'll take whatever gets me close):

    - I want updates to automatically download and install every night.
    - If no user is logged in, I would like the machine to reboot.
    - If a user is logged in, I would like their machine not to reboot, but instead wait until the next "installation period", where it can perform any other needed installations and reboot then (provided a user account is not still logged in).
    - If a user is to be prompted for reboot, it should happen only once per day (if possible), but every time they are prompted they must have a way to postpone the reboot.
    - I do not want users to be forced to restart their computer whenever the computer thinks it should happen (unless it's after an update installation and there are no logged-in users); it doesn't seem productive to force a system restart in the midst of a person's workday.

    Is there something I can do with the GPO that would help make WSUS less intrusive? Even if it gave the user an option to Restart Later - that would be better than what is happening now.
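
    The Windows Update GPO has a setting aimed at precisely the logged-in case: "No auto-restart with logged on users for scheduled automatic updates installations", plus a re-prompt interval so a postponed reboot nags at most once per chosen period. These map to registry values under the standard WSUS policy key; a sketch:

        ; Computer Configuration > Administrative Templates > Windows Components >
        ; Windows Update -- or the equivalent registry values:
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        ; never force a reboot while a user is logged on
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001
        ; re-prompt for a pending restart at most every 1440 minutes (once a day)
        "RebootRelaunchTimeoutEnabled"=dword:00000001
        "RebootRelaunchTimeout"=dword:000005a0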

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca, because I wanted all of my machines to run the exact same version of the plugin.

    Now some time has passed, a newer version of the plugin has been released, and it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads as though it is for add-ons to a larger master program, not for updates released over time. I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO, but I feel as though, over time, I would wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy, but this seems like a backdoor method that will only get me into trouble.

    In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?
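
    For what it's worth, the Upgrades tab is the supported path for exactly this case, not only for add-on packages. Roughly - and this is an outline from my reading of Group Policy Software Installation, so verify in a test OU first:

    1. Add the new Flash Player MSI to the same GPO as a second package.
    2. In the new package's properties, open the Upgrades tab and add the old package under "Packages that this package will upgrade".
    3. Choose whether the existing package is uninstalled first or upgraded in place, per the vendor's guidance for the MSI.
    4. Once all clients have picked up the upgrade, remove the old package from the GPO.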

    Read the article
