Search Results

Search found 19966 results on 799 pages for 'datetime query'.


  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
      May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
      May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied
    I've read of Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for com and some for org, most looking for an MX record. There are requests from multiple IPs, but each IP will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?
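    A quick way to quantify whether traffic like this is abusive is to tally the denied queries per client address. The following is only a minimal C# sketch of that idea; the log path, the exact log format shown above, and the "100 queries" threshold are all assumptions for illustration, not part of any named tool:
      using System;
      using System.IO;
      using System.Linq;
      using System.Text.RegularExpressions;

      class DeniedQueryCounter
      {
          static void Main()
          {
              // Matches the "client <ip>#<port>: query (cache) '...' denied" lines shown above.
              var pattern = new Regex(@"client (?<ip>[\d\.]+)#\d+: query \(cache\) '.*' denied");

              var counts = File.ReadLines("/var/log/messages")   // assumed log location
                  .Select(line => pattern.Match(line))
                  .Where(m => m.Success)
                  .GroupBy(m => m.Groups["ip"].Value)
                  .Select(g => new { Ip = g.Key, Count = g.Count() })
                  .OrderByDescending(x => x.Count);

              foreach (var entry in counts.Where(x => x.Count > 100))   // arbitrary "suspicious" threshold
                  Console.WriteLine("{0}\t{1}", entry.Ip, entry.Count);
          }
      }
    A per-address count makes it easy to see whether a handful of clients are responsible for the bulk of the denied lookups, which is the pattern described in the question.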


  • Can My Personal GMail Query A Remote LDAP Server?

    - by Maarx
    I have a personal GMail account, from which I frequently send e-mail to a great many various users of a specific business. The corporation has been kind enough to provide me with the credentials to access their LDAP server, with which I would like my GMail web client to be able to auto-complete partial addresses or names for which that LDAP server has an entry. Is there any way I can get a personal GMail account (or its corresponding Google account) to incorporate an LDAP server into its Contacts? If I cannot get it to query dynamically and on-demand, is there an idiot-proof way (assuming the client permits, which they may not) to query the LDAP server for its entire database, save it, and bulk import it to GMail? Perhaps, even, something I could set to repeat periodically (weekly, perhaps), without human interaction? If I did the latter, I assume it would be trivial to import all of these contacts under a single category that could be easily manipulated from within the GMail web-based client. I have been a staunch user and supporter of the GMail web-based client since its inception, but this one is kind of a deal-breaker for me. If it's impossible, what do you suggest I do?
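    If the bulk-export route is the way to go, the export half can be scripted. Below is only a minimal C# sketch using System.DirectoryServices; the server name, search base, credentials, and LDAP filter are hypothetical placeholders, and the CSV column names are an assumption that should be checked against a CSV exported from GMail itself before importing:
      using System;
      using System.DirectoryServices;   // reference System.DirectoryServices.dll
      using System.IO;

      class LdapContactExport
      {
          static void Main()
          {
              // Hypothetical directory location and credentials.
              var root = new DirectoryEntry("LDAP://ldap.example.com/DC=example,DC=com", "user", "password");
              var searcher = new DirectorySearcher(root, "(&(objectClass=person)(mail=*))");
              searcher.PropertiesToLoad.Add("displayName");
              searcher.PropertiesToLoad.Add("mail");
              searcher.PageSize = 500;   // enable paging so large directories come back completely

              using (var csv = new StreamWriter("contacts.csv"))
              {
                  csv.WriteLine("Name,E-mail Address");   // assumed header; verify against a GMail export
                  foreach (SearchResult result in searcher.FindAll())
                  {
                      string name = result.Properties["displayName"].Count > 0
                          ? (string)result.Properties["displayName"][0] : "";
                      string mail = (string)result.Properties["mail"][0];
                      csv.WriteLine("\"{0}\",{1}", name.Replace("\"", "\"\""), mail);
                  }
              }
          }
      }
    A scheduled task running something like this weekly would cover the "repeat periodically, without human interaction" half, leaving only the import step manual.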


  • Performance issues with repeatable loops as control part

    - by djerry
    Hey guys, in my application I need to show the user the calls that have been made. The user can arrange some filters, according to what they want to see. The problem is that I find it quite hard to filter the calls without losing performance. This is what I am using now: private void ProcessFilterChoice() { _filteredCalls = ServiceConnector.ServiceConnector.SingletonServiceConnector.Proxy.GetAllCalls().ToList(); if (cboOutgoingIncoming.SelectedIndex > -1) GetFilterPartOutgoingIncoming(); if (cboInternExtern.SelectedIndex > -1) GetFilterPartInternExtern(); if (cboDateFilter.SelectedIndex > -1) GetFilteredCallsByDate(); wbPdf.Source = null; btnPrint.Content = "Pdf preview"; } private void GetFilterPartOutgoingIncoming() { if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Outgoing")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Caller.E164.Length > 4 || _filteredCalls[i].Caller.E164.Equals("0")) _filteredCalls.RemoveAt(i); } else if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Incoming")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Called.E164.Equals("0")) _filteredCalls.RemoveAt(i); } } private void GetFilterPartInternExtern() { if (cboInternExtern.SelectedItem.ToString().Equals("Intern")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Caller.E164.Length > 4 || _filteredCalls[i].Caller.E164.Equals("0")) _filteredCalls.RemoveAt(i); } else if (cboInternExtern.SelectedItem.ToString().Equals("Extern")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if ((_filteredCalls[i].Called.E164.Length < 5 && _filteredCalls[i].Caller.E164.Length < 5) || _filteredCalls[i].Called.E164.Equals("0")) _filteredCalls.RemoveAt(i); } } private void GetFilteredCallsByDate() { DateTime period = DateTime.Now; switch (cboDateFilter.SelectedItem.ToString()) { case "Today": period = DateTime.Today; break; case "Last week": period = DateTime.Today.Subtract(new TimeSpan(7, 0, 0, 0)); break; case "Last month": period = DateTime.Today.AddMonths(-1); break; case "Last year": period = DateTime.Today.AddYears(-1); break; default: return; } for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Start < period) _filteredCalls.RemoveAt(i); } } _filteredCalls is a list of Call objects. Call is a class that looks like this: [DataContract] public class Call { private User caller, called; private DateTime start, end; private string conferenceId; private int id; private bool isNew = false; [DataMember] public bool IsNew { get { return isNew; } set { isNew = value; } } [DataMember] public int Id { get { return id; } set { id = value; } } [DataMember] public string ConferenceId { get { return conferenceId; } set { conferenceId = value; } } [DataMember] public DateTime End { get { return end; } set { end = value; } } [DataMember] public DateTime Start { get { return start; } set { start = value; } } [DataMember] public User Called { get { return called; } set { called = value; } } [DataMember] public User Caller { get { return caller; } set { caller = value; } } } Can anyone direct me to a better solution or make some suggestions?
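    One way to avoid the repeated RemoveAt calls inside loops (each RemoveAt on a List is O(n), so heavy filtering degrades quickly) is to express the filters as a single LINQ query over the calls and materialize the result once. The following is only a sketch of that idea, reusing the fields and combo boxes from the code above; the intern/extern part is omitted for brevity and GetPeriodFromComboBox is a hypothetical helper containing the same switch as GetFilteredCallsByDate:
      private void ProcessFilterChoice()
      {
          IEnumerable<Call> calls = ServiceConnector.ServiceConnector.SingletonServiceConnector.Proxy.GetAllCalls();

          if (cboOutgoingIncoming.SelectedIndex > -1)
          {
              bool outgoing = cboOutgoingIncoming.SelectedItem.ToString().Equals("Outgoing");
              // Same length/"0" tests as the original loops, expressed as a predicate.
              calls = calls.Where(c => outgoing
                  ? !(c.Caller.E164.Length > 4 || c.Caller.E164.Equals("0"))
                  : !(c.Called.E164.Length > 4 || c.Called.E164.Equals("0")));
          }

          if (cboDateFilter.SelectedIndex > -1)
          {
              DateTime period = GetPeriodFromComboBox();   // hypothetical helper, same switch as GetFilteredCallsByDate
              calls = calls.Where(c => c.Start >= period);
          }

          _filteredCalls = calls.ToList();   // one pass over the data, no per-item RemoveAt
      }
    Because the predicates are only evaluated when ToList() runs, adding or removing filter clauses stays cheap, and the whole filter is a single pass over the call list.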


  • Enhanced Dynamic Filtering

    - by Ricardo Peres
    Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying. Match type, represented by the following options: public enum MatchType { StartsWith = 0, Contains = 1 } And word match: public enum WordMatch { AnyWord = 0, AllWords = 1, ExactPhrase = 2 } You can combine the two levels in order to achieve the following combinations:
      MatchType.StartsWith + WordMatch.AnyWord: matches any record that starts with any of the specified words
      MatchType.StartsWith + WordMatch.AllWords: not available, does not make sense, throws an exception
      MatchType.StartsWith + WordMatch.ExactPhrase: matches any record that starts with the exact specified phrase
      MatchType.Contains + WordMatch.AnyWord: matches any record that contains any of the specified words
      MatchType.Contains + WordMatch.AllWords: matches any record that contains all of the specified words
      MatchType.Contains + WordMatch.ExactPhrase: matches any record that contains the exact specified phrase
    Here is the code: public static IList Search(IQueryable query, Type entityType, String dataTextField, String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount) { String [] terms = phrase.Split(' ').Distinct().ToArray(); StringBuilder result = new StringBuilder(); PropertyInfo displayProperty = entityType.GetProperty(dataTextField); IList searchList = null; MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(entityType, displayProperty.PropertyType); MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(entityType); MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray() [ 0 ].MakeGenericMethod(entityType); MethodInfo distinctMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Distinct" && m.GetParameters().Length == 1).Single().MakeGenericMethod(entityType); MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(entityType); MethodInfo matchMethod = typeof(String).GetMethod ( (matchType == MatchType.StartsWith) ? "StartsWith" : "Contains", new Type [] { typeof(String) } ); MemberExpression member = Expression.MakeMemberAccess ( Expression.Parameter(entityType, "n"), displayProperty ); MethodCallExpression call = null; LambdaExpression where = null; LambdaExpression orderBy = Expression.Lambda ( member, member.Expression as ParameterExpression ); switch (matchType) { case MatchType.StartsWith: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: throw (new Exception("The match type StartsWith is not supported with word match AllWords")); } break; case MatchType.Contains: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.AndAlso ( where.Body, exp ), where.Parameters.ToArray() ); } break; } break; } query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; query = whereMethod.Invoke(null, new Object [] { query, where }) as IQueryable; if (maxCount != 0) { query = takeMethod.Invoke(null, new Object [] { query, maxCount }) as IQueryable; } searchList = toListMethod.Invoke(null, new Object [] { query }) as IList; return (searchList); } And this is how you'd use it: IQueryable query = ctx.MyEntities; IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres", MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/);


  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced on a recent project was with the way that our constraints were being managed. We were using Subsonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this.  SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance, and performed some testing, we found that performance wasn't as expected. A constraint can be disabled, enabled but not trusted, or enabled and trusted.  When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important.  So what does it mean to say that a constraint is trusted or not trusted?  Well, this refers to the SQL Server Query Optimizer, and whether it trusts that the constraint holds for the existing data.  If it trusts the constraint, then it doesn't need to re-verify it when building a query plan, so the query can be executed much faster. Here is an example based on this article on TechNet. Here we create two tables with a Foreign Key constraint between them, and add a single row to each.  We then query the tables:
      DROP TABLE t2
      DROP TABLE t1
      GO

      CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
      CREATE TABLE t2(col1 int NOT NULL)

      ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
      REFERENCES t1(col1)

      INSERT INTO t1 VALUES(1)
      INSERT INTO t2 VALUES(1)
      GO

      SELECT COUNT(*) FROM t2
      WHERE EXISTS
        (SELECT *
         FROM t1
         WHERE t1.col1 = t2.col1)
    This all works fine, and in this scenario the constraint is enabled and trusted.  We can verify this by executing the following SQL to query the 'is_disabled' and 'is_not_trusted' properties:
      select name, is_disabled, is_not_trusted from sys.foreign_keys
    Both flags come back as 0. We can disable the constraint using this SQL:
      alter table t2 NOCHECK CONSTRAINT fk_t2_t1
    And when we query the constraints again, we see that the constraint is disabled and not trusted (both flags are now 1). So the constraint won't be enforced and we can insert data into the table t2 that doesn't match the data in t1, but we don't want to do this, so we can enable the constraint again using this SQL:
      alter table t2 CHECK CONSTRAINT fk_t2_t1
    But when we query the constraints again, we see that the constraint is enabled (is_disabled is back to 0), but it is still not trusted (is_not_trusted remains 1). This means that the optimizer cannot rely on the constraint when it builds query plans, which will impact the performance of queries over these tables, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
    First we should check that our constraints haven't been violated, which we can do by running DBCC:
      DBCC CHECKCONSTRAINTS (t2)
    Hopefully DBCC completes without finding any violations of your constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:
      alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1
    At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint's properties, we find that it is now trusted again (is_not_trusted is back to 0). To fix our specific problem, we created a script that re-checked all constraints on our tables, using the following syntax:
      ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
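    To automate that last step across a whole database, something along these lines can find every enabled-but-untrusted foreign key and re-check it. This is only a minimal C# sketch under stated assumptions: the connection string is a placeholder, and it relies on the sys.foreign_keys columns shown above plus the standard OBJECT_NAME/OBJECT_SCHEMA_NAME functions:
      using System;
      using System.Collections.Generic;
      using System.Data.SqlClient;

      class RetrustConstraints
      {
          static void Main()
          {
              using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
              {
                  conn.Open();

                  // Find enabled-but-untrusted foreign keys and their parent tables.
                  var find = new SqlCommand(
                      @"select name, OBJECT_SCHEMA_NAME(parent_object_id) as [schema],
                               OBJECT_NAME(parent_object_id) as [table]
                        from sys.foreign_keys
                        where is_not_trusted = 1 and is_disabled = 0", conn);

                  var fixes = new List<string>();
                  using (var reader = find.ExecuteReader())
                      while (reader.Read())
                          fixes.Add(string.Format(
                              "ALTER TABLE [{0}].[{1}] WITH CHECK CHECK CONSTRAINT [{2}]",
                              reader["schema"], reader["table"], reader["name"]));

                  // Re-check each constraint so the optimizer trusts it again.
                  foreach (var sql in fixes)
                      new SqlCommand(sql, conn).ExecuteNonQuery();
              }
          }
      }
    Running DBCC CHECKCONSTRAINTS first, as described above, is still advisable, since WITH CHECK will fail if violating rows were inserted while a constraint was disabled.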


  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.


  • Android SQLite Problem: Program Crashes When Trying a Query!

    - by Skatephone
    Hi, I have a problem programming with the Android SDK 1.6. I'm doing the same things as the "notepad example", but the program crashes when I try some queries. If I try to do a query directly in the DatabaseHelper create() method it works, but outside of this function it doesn't. Do you have any idea? This is the source: public class DbAdapter { public static final String KEY_NAME = "name"; public static final String KEY_TOT_DAYS = "totdays"; public static final String KEY_ROWID = "_id"; private static final String TAG = "DbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "flowratedb"; private static final String DATABASE_TABLE = "girl_data"; private static final String DATABASE_TABLE_2 = "girl_cyle"; private static final int DATABASE_VERSION = 2; /** * Database creation sql statement */ private static final String DATABASE_CREATE = "create table "+DATABASE_TABLE+" (id integer, name text not null, totdays int);"; private static final String DATABASE_CREATE_2 = "create table "+DATABASE_TABLE_2+" (ref_id integer, day long not null);"; private final Context mCtx; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DATABASE_CREATE); db.execSQL(DATABASE_CREATE_2); db.delete(DATABASE_TABLE, null, null); db.delete(DATABASE_TABLE_2, null, null); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE_2); onCreate(db); } } public DbAdapter(Context ctx) { this.mCtx = ctx; } public DbAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } public long createGirl(int id,String name, int totdays) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_ROWID, id); initialValues.put(KEY_NAME, name); initialValues.put(KEY_TOT_DAYS, totdays); return mDb.insert(DATABASE_TABLE, null, initialValues); } public long createGirl_fd_day(int refid, long fd) { ContentValues initialValues = new ContentValues(); initialValues.put("ref_id", refid); initialValues.put("calendar", fd); return mDb.insert(DATABASE_TABLE, null, initialValues); } public boolean updateGirl(int rowId, String name, int totdays) { ContentValues args = new ContentValues(); args.put(KEY_NAME, name); args.put(KEY_TOT_DAYS, totdays); return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } public boolean deleteGirlsData() { if (mDb.delete(DATABASE_TABLE_2, null, null)>0) if(mDb.delete(DATABASE_TABLE, null, null)>0) return true; return false; } public Bundle fetchAllGirls() { Bundle extras = new Bundle(); Cursor cur = mDb.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, null, null, null, null, null); cur.moveToFirst(); int tot = cur.getCount(); extras.putInt("tot", tot); int index; for (int i=0;i<tot;i++){ index=cur.getInt(cur.getColumnIndex("_id")); extras.putString("name"+index, cur.getString(cur.getColumnIndex("name"))); extras.putInt("totdays"+index, cur.getInt(cur.getColumnIndex("totdays"))); } cur.close(); return extras; } public Cursor fetchGirl(int rowId) throws SQLException { Cursor mCursor = 
mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } public Cursor fetchGirlCD(int rowId) throws SQLException { Cursor mCursor = mDb.query(true, DATABASE_TABLE_2, new String[] {"ref_id", "day"}, "ref_id=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } } Thanks! Valerio, from Italy :)


  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source free helper class that lets you run multiple work in parallel threads, get success, failure and progress update on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after certain time, chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work, nor do you need to declare event handlers to receive notification and carry additional data through private variables. You can safely pass objects produced from different thread to the success callback. Moreover, you can wait for work to complete before you do certain operation and you can abort all parallel work while they are in-flight. If you are building highly responsive WPF UI where you have to carry out multiple job in parallel yet want full control over those parallel jobs completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests, that confirms it really does what it says it does. The source code of the library is part of the “Utilities” project in PlantUmlEditor source code hosted at Google Code. The library comes in two flavors, one is the ParallelWork static class, which has a collection of static methods that you can call. Another is the Start class, which is a fluent wrapper over the ParallelWork class to make it more readable and aesthetically pleasing code. ParallelWork allows you to start work immediately on separate thread or you can queue a work to start after some duration. You can start an immediate work in a new thread using the following methods: void StartNow(Action doWork, Action onComplete) void StartNow(Action doWork, Action onComplete, Action<Exception> failed) For example, ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; }); Or you can use the fluent way Start.Work: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run(); Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads: void StartNow<T>(Func<T> doWork, Action<T> onComplete) void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail) For example, ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); }); Or, the fluent way: Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run(); You can also start a work to happen after some time using these methods: DispatcherTimer StartAfter(Action onComplete, TimeSpan duration) DispatcherTimer StartAfter(Action doWork,Action onComplete,TimeSpan duration) You can use this to perform some timed operation on the UI thread, as well as perform some operation in separate thread after some time. 
ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration); Or, the fluent way: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration); There are several overloads of these functions that take an exception callback for handling exceptions, or that report progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application. // Check if there's a newer version of the app Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); }); The above code shows you how to get exception callbacks on the UI thread so that you can take necessary actions on the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don't start another parallel work every 10 seconds while a work is already queued. You need to make sure you start a new work only when there's no other background work going on. Here's how you can do it: private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } } If you want to shut down your application and want to make sure no parallel work is going on, then you can call the StopAll() method: ParallelWork.StopAll(); If you want to wait for parallel work to complete, then you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all the parallel work completes or the timeout period elapses: result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)); The result is true if all parallel work completed. If it's false, then the timeout period elapsed and all parallel work did not complete. For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.
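    As an addendum, the shutdown-related calls described above can be combined in a window's Closing handler so that in-flight work is aborted and given a bounded amount of time to wind down. This is only a sketch of how those documented methods might be wired together; the five-second timeout, the handler name, and the confirmation dialog are assumptions, not part of the library itself:
      private void MainWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
      {
          if (ParallelWork.IsAnyWorkRunning())
          {
              // Ask all queued and running work to stop, then wait briefly for it to drain.
              ParallelWork.StopAll();
              bool finished = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(5));

              if (!finished)
              {
                  // Work is still running after the timeout; let the user decide whether to close anyway.
                  e.Cancel = MessageBox.Show(this,
                      "Background work is still running. Close anyway?",
                      "Busy", MessageBoxButton.YesNo) == MessageBoxResult.No;
              }
          }
      }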


  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.  
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, typically cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN):
      - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
      - Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.
      - Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
      - Queries that contain Concat, unless it is applied to indexable data sources.
      - Queries that contain Reverse, unless applied to an indexable data source.
    If the specific query matches one of these shapes, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.  
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.


  • Twitter gem - undefined method `stringify_keys'

    - by Piet
    Have you been getting the following errors when running the Twitter gem lately ? /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `send': undefined method `stringify_keys' for # (NoMethodError) from /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `method_missing’ from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:131:in `deep_update’ from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:50:in `initialize’ from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `new’ from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `fetch’ from test.rb:26 It’s because Twitter has been sending back plain text errors that are treated as a string instead of json and can’t be properly ‘Mashed’ by the Twitter gem. Also check http://github.com/jnunemaker/twitter/issues#issue/6. Without diving into the bowels of the Twitter gem or HTTParty, you could ‘begin…rescue’ this error and try again in 5 minutes. I fixed it by overriding the offending code to return nil and checking for a nil response as follows: module Twitter class Search def fetch(force=false) if @fetch.nil? || force query = @query.dup query[:q] = query[:q].join(' ') query[:format] = 'json' #This line is the hack and whole reason we're monkey-patching at all. response = self.class.get('http://search.twitter.com/search', :query => query, :format => :json) #Our patch: response should be a Hash. If it isnt, return nil. return nil if response.class != Hash @fetch = Mash.new(response) end @fetch end end end (adapted from http://github.com/jnunemaker/twitter/issues#issue/9) If you have a better solution: speak up!


  • EMERGENCY - Major Problems After Perl Module Installed via WHM

    - by Russell C.
    I attempted to install the perl module Net::Twitter::Role::API::Lists using WHM and after doing so my whole site came down. It seems that something that was updated with the install isn't functioning correctly and since our website it written in Perl none of our site scripts will run. In almost 8 years of working with Perl I've never had any issues arise after installing a perl module so I have no idea how to even start troubleshooting. The error I see when trying to compile any of our Perl scripts is below. I'd appreciate any advice on what might be wrong and steps on how I can go about resolve it. Thanks in advance for your help! Attribute (+type_constraint) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 
MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::String has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4fd5c4)', 'Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'undef', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 10 require MooseX/AttributeHelpers/String.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 25 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_per

    Read the article

  • SSRS 2008: is it possible to make a report parameter NOT query-based for some linked report?

    - by Stefan Mohr
    I suspect the answer is no, but here goes... I'm using the WebForms Report Viewer on a public-facing website to allow users to report on themselves or their users (if the user is an admin user). A report has a parameter called Users where an admin can pick a user from the list and generate a report from it. Mundane users can also view this report, but I programmatically create a linked report for each user and set the UserID value to their ID so they can only view themselves. This works well except that the UserID parameter is query-based, and not every user is visible in the list using default settings (the user list is based off date range parameters the user can provide, and only users we consider 'active' during the date range are visible). This is blowing up for mundane users that are not active for the default date range (which is the previous month). I suspect the flow of execution is something like this: (1) the report loads with default parameters; (2) the linked report rules are applied and the value of UserID is overridden with the ID in the linked report; (3) the UserID field is hidden to prevent the user from changing it; (4) SSRS can't find the UserID default value in the query results (a query I didn't even want it to run), so it displays the error "The 'UserID' parameter is missing a value". Through some testing I've found a perfect correlation between users not inside the default date range and users who can't view the report. Can anyone suggest a way to make the report usable for those users that aren't in the default list? The reports are created programmatically so I do have a fair bit of control over the situation. I would love to simply be able to mark a parameter in a linked report as no longer being query-based, but those properties are all read-only. I really, really don't want to have to create duplicate reports to accommodate these users, but I'm at a bit of a loss right now. Any suggestions are greatly appreciated!

    Read the article

  • Youtube API - How to limit results for pagination?

    - by worchyld
    I want to grab a user's uploads (i.e. BBC) and limit the output to 10 per page. Whilst I can use the following URL: http://gdata.youtube.com/feeds/api/users/bbc/uploads/?start-index=1&max-results=10 and the above works okay, I want to use the query method instead. The Zend Framework docs (http://framework.zend.com/manual/en/zend.gdata.youtube.html) state that I can retrieve videos uploaded by a user, but ideally I want to use the query method to limit the results for pagination. The query method is in the Zend Framework docs (same page as before, under the title 'Searching for videos by metadata') and is similar to this: [code] $yt = new Zend_Gdata_YouTube(); $query = $yt->newVideoQuery(); $query->setTime('today'); $query->setMaxResults(10); $videoFeed = $yt->getUserUploads( NULL, $query ); // Output print ''; foreach($videoFeed as $video): print '' . $video->title . ''; endforeach; print ''; [/code] The problem is I can't do $query->setUser('bbc'). I tried setAuthor but this returns a totally different result. Ideally, I want to use the query method to grab the results in a paginated fashion. How do I use the $query method to set my limits for pagination? Thanks.
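    For reference, the feed URL quoted above already exposes the two paging parameters (start-index and max-results), so paging can also be done against the raw feed, independently of the Zend query object. A rough Python sketch of that idea (the URL and parameter names come straight from the question; the helper name and page size are illustrative assumptions):

        import urllib.request
        from urllib.parse import urlencode

        def fetch_uploads_page(username, page, per_page=10):
            # Fetch one page of a user's uploads from the GData feed quoted in the question.
            start_index = (page - 1) * per_page + 1   # GData paging is 1-based
            params = urlencode({"start-index": start_index, "max-results": per_page})
            url = "http://gdata.youtube.com/feeds/api/users/%s/uploads/?%s" % (username, params)
            with urllib.request.urlopen(url) as response:
                return response.read()                # Atom XML for that page

        # page 1 -> start-index=1, page 2 -> start-index=11, and so on
        feed_xml = fetch_uploads_page("bbc", page=2)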

    Read the article

  • problem with routing/T4MVC Url.Action()

    - by VinnyG
    I have these 2 routes: routes.MapRoute("Agenda", ConfigurationManager.AppSettings["eventsUrl"] + "/{year}/{month}", MVC.Events.Index(), new { year = DateTime.Now.Year, month = DateTime.Now.Month }); routes.MapRoute("AgendaDetail", ConfigurationManager.AppSettings["eventsUrl"] + "/{year}/{month}/{day}", MVC.Events.Detail(), new { year = DateTime.Now.Year, month = DateTime.Now.Month, day = DateTime.Now.Day }); And it works perfectly with this code: <a href="<%= Url.Action(MVC.Events.Detail(Model.EventsModel.PreviousDay.Year, Model.EventsModel.PreviousDay.Month, Model.EventsModel.PreviousDay.Day))%>" title="<%= Model.EventsModel.PreviousDay.ToShortDateString() %>"><img src="<%= Links.Content.images.contenu.calendrier.grand.mois_precedent_png %>" alt="événement précédent" /></a> Except when I try to link to today: if it's today, it will point only to www.myurl.com/agenda, which is the value of ConfigurationManager.AppSettings["eventsUrl"]. What am I doing wrong? It's as if, when it's today, it points back to the default agenda... Thanks for the help!

    Read the article

  • Java Appengine APPSTATS causing java out of memory error

    - by aloo
    I have several servlets in my java appengine app that do in memory sorting and take on the order of seconds to complete. These complete error free. However, I recently enabled appstats for appengine and started receiving the following error: java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Unknown Source) at java.lang.AbstractStringBuilder.expandCapacity(Unknown Source) at java.lang.AbstractStringBuilder.append(Unknown Source) at java.lang.StringBuilder.append(Unknown Source) at java.lang.StringBuilder.append(Unknown Source) at java.lang.StringBuilder.append(Unknown Source) at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.write(TextFormat.java:344) at com.google.appengine.repackaged.com.google.protobuf.TextFormat$TextGenerator.print(TextFormat.java:332) at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printUnknownFields(TextFormat.java:249) at com.google.appengine.repackaged.com.google.protobuf.TextFormat.print(TextFormat.java:47) at com.google.appengine.repackaged.com.google.protobuf.TextFormat.printToString(TextFormat.java:73) at com.google.appengine.tools.appstats.Recorder.makeSummary(Recorder.java:157) at com.google.appengine.tools.appstats.Recorder.makeSyncCall(Recorder.java:239) at com.google.apphosting.api.ApiProxy.makeSyncCall(ApiProxy.java:98) at com.google.appengine.api.datastore.DatastoreApiHelper.makeSyncCall(DatastoreApiHelper.java:54) at com.google.appengine.api.datastore.PreparedQueryImpl.runQuery(PreparedQueryImpl.java:127) at com.google.appengine.api.datastore.PreparedQueryImpl.asQueryResultList(PreparedQueryImpl.java:81) at org.datanucleus.store.appengine.query.DatastoreQuery.fulfillEntityQuery(DatastoreQuery.java:379) at org.datanucleus.store.appengine.query.DatastoreQuery.executeQuery(DatastoreQuery.java:289) at org.datanucleus.store.appengine.query.DatastoreQuery.performExecute(DatastoreQuery.java:239) at org.datanucleus.store.appengine.query.JDOQLQuery.performExecute(JDOQLQuery.java:89) at org.datanucleus.store.query.Query.executeQuery(Query.java:1489) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1371) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243) at com.poo.pooserver.dataaccess.DataAccessHelper.getPooStream(DataAccessHelper.java:204) at com.poo.pooserver.GetPooStreamServlet.doPost(GetPooStreamServlet.java:58) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.google.appengine.tools.appstats.AppstatsFilter.doFilter(AppstatsFilter.java:92) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found. Simple Parameters from Xml or JSON Content Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:public string ReturnAlbumInfo(Album album) { return album.AlbumName + " (" + album.YearReleased.ToString() + ")"; } However, if you have methods that accept simple parameter types like strings, dates, number etc., those methods don't receive their parameters from XML or JSON body by default and you may end up with failures. Take the following two very simple methods:public string ReturnString(string message) { return message; } public HttpResponseMessage ReturnDateTime(DateTime time) { return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time); } The first one accepts a string and if called with a JSON string from the client like this:var client = new HttpClient(); var result = client.PostAsJsonAsync<string>(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, "Hello World").Result; which results in a trace like this: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 13Expect: 100-continueConnection: Keep-Alive "Hello World" produces… wait for it: null. Sending a date in the same fashion:var client = new HttpClient(); var result = client.PostAsJsonAsync<DateTime>(http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime, new DateTime(2012, 1, 1)).Result; results in this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 30Expect: 100-continueConnection: Keep-Alive "\/Date(1325412000000-1000)\/" (yes still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) produces an error response: The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter. Basically any simple parameters are not parsed properly resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:public string ReturnString([FromBody] string message) andpublic HttpResponseMessage ReturnDateTime([FromBody] DateTime time) which explicitly instructs Web API to read the value from the body. UrlEncoded Form Variable Parsing Another similar issue I ran into is with POST Form Variable binding. Web API can retrieve parameters from the QueryString and Route Values but it doesn't explicitly map parameters from POST values either. 
Taking our same ReturnString function from earlier and posting a message POST variable like this:var formVars = new Dictionary<string,string>(); formVars.Add("message", "Some Value"); var content = new FormUrlEncodedContent(formVars); var client = new HttpClient(); var result = client.PostAsync(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, content).Result; which produces this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/x-www-form-urlencodedHost: rasxpsContent-Length: 18Expect: 100-continue message=Some+Value When calling ReturnString:public string ReturnString(string message) { return message; } unfortunately it does not map the message value to the message parameter. This sort of mapping unfortunately is not available in Web API. Web API does support binding to form variables but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example you can use the following code to pick up POST variable data:public string ReturnMessageModel(MessageModel model) { return model.Message; } public class MessageModel { public string Message { get; set; }} Note that the model is bound and the message form variable is mapped to the Message property as would other variables to properties if there were more. This works but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) in Web API's Request object as far as I can discern. Well only if you consider this easy:public string ReturnString() { var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result; return formData.Get("message"); } Oddly FormDataCollection does not allow for indexers to work so you have to use the .Get() method which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:public string ReturnString() { return HttpContext.Current.Request.Form["message"]; } which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • ASP.Net MVC Keeping action parameters between postbacks

    - by Matt
    Say I have a page that displays search results. I search for stackoverflow and it returns 5000 results, 10 per page. Now I find myself doing this when building links on that page: <%=Html.ActionLink("Page 1", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Page 2", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Page 3", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Next", "Search", new { query=ViewData["query"], page etc..%> I don't like this; I have to build my links with careful consideration to what was posted previously etc. What I'd like to do is <%=Html.BuildActionLinkUsingCurrentActionPostData ("Next", "Search", new { Page = 1}); where the anonymous dictionary overrides anything currently set by the previous action. Essentially I care about what the previous action parameters were, because I want to reuse them; it sounds simple, but start adding sort and loads of advanced search options and it starts getting messy. I'm probably missing something obvious.
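    The underlying idea, take whatever parameters drove the current action, override one or two (such as the page number), and serialize the rest back into the link, is language-agnostic. Below is a minimal Python sketch of that merge, purely to illustrate the shape of such a helper; the function name and the example parameters are made up and are not part of ASP.NET MVC's helpers:

        from urllib.parse import urlencode

        def build_search_link(base_path, current_params, **overrides):
            # Start from the parameters of the current action, then let explicit
            # overrides win (e.g. page=2 for the "Next" link).
            merged = dict(current_params)
            merged.update(overrides)
            return base_path + "?" + urlencode(merged)

        current_params = {"query": "stackoverflow", "sort": "date", "page": 1}
        print(build_search_link("/Search", current_params, page=2))
        # -> /Search?query=stackoverflow&sort=date&page=2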

    Read the article

  • JIRA JQL searching by date - is there a way of getting Today() (Date) instead of Now() (DateTime)

    - by Shevek
    I am trying to create some Issue Filters in JIRA based on CreateDate. The only date/time function I can find is Now(), with searches relative to it, i.e. "-1d", "-4d", etc. The only problem with this is that Now() is time-specific, so there is no way of getting a particular day's created issues. For example, Created < Now() AND Created >= "-1d", when run at 2pm today, will show all issues created from 2pm yesterday to 2pm today; when run at 9am tomorrow, it will show all issues created from 9am today to 9am tomorrow. What I want is to be able to search for all issues created from 00:00 to 23:59 on any day. Is this possible?

    Read the article

  • Python MySQLdb placeholders syntax

    - by ensnare
    I'd like to use placeholders as seen in this example: cursor.execute (""" UPDATE animal SET name = %s WHERE name = %s """, ("snake", "turtle")) Except I'd like to have the query be its own variable, as I need to issue the same query against multiple databases, as in: query = """UPDATE animal SET name = %s WHERE name = %s """, ("snake", "turtle") cursor.execute(query) cursor2.execute(query) cursor3.execute(query) What would be the proper syntax for doing something like this?
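    One reading of the question is that only the SQL text and the parameter tuple need to live in variables; the placeholder substitution itself still has to happen inside each cursor's execute() call. A minimal Python sketch under that assumption (the connection details are placeholders, not real hosts):

        import MySQLdb

        # Query text and parameters kept separate; MySQLdb fills the %s
        # placeholders when execute() runs, once per connection.
        query = "UPDATE animal SET name = %s WHERE name = %s"
        params = ("snake", "turtle")

        connections = [
            MySQLdb.connect(host="db1.example.com", user="me", passwd="secret", db="zoo"),
            MySQLdb.connect(host="db2.example.com", user="me", passwd="secret", db="zoo"),
        ]

        for conn in connections:
            cursor = conn.cursor()
            cursor.execute(query, params)
            conn.commit()
            cursor.close()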

    Read the article

  • What is the worst C#/.NET gotcha?

    - by MusiGenesis
    This question is similar to this one, but focused on C# and .NET. I was recently working with a DateTime object, and wrote something like this: DateTime dt = DateTime.Now; dt.AddDays(1); return dt; // still today's date! WTF? The intellisense documentation for AddDays says it adds a day to the date, which it doesn't - it actually returns a date with a day added to it, so you have to write it like: DateTime dt = DateTime.Now; dt = dt.AddDays(1); return dt; // tomorrow's date This one has bitten me a number of times before, so I thought it would be useful to catalog the worst C# gotchas.

    Read the article

  • How do I get the DateTime that a page was last published in Sitefinity?

    - by travis
    Here is what I have: Dim cmsManager As New Telerik.Cms.CmsManager() Dim currentNode As Telerik.Cms.Web.CmsSiteMapNode = CType(SiteMap.CurrentNode, Telerik.Cms.Web.CmsSiteMapNode) Dim currentPage As Telerik.Cms.ICmsPage = currentNode.GetCmsPage() Dim currentPageId As Guid = currentPage.ID Dim pageFromDb As Telerik.Cms.IPage = cmsManager.GetPage(currentPageId) Me.LastUpdateDate = pageFromDb.DateModified Unfortunately .DateModified returns the last time that a page was edited instead of when it was last published. I've been looking through the documentation but I haven't been able to find any corresponding properties.

    Read the article

  • MembershipProvider user id

    - by niao
    Greetings, I wrote a custom MembershipProvider for my ASP.NET MVC application. I get the user as follows: public override MembershipUser GetUser(string username, bool userIsOnline) { using (CPersistanceManager pm = new CPersistanceManager()) { pm.EnsureConnectionOpen(); MembershipUser membershipUser = null; COperator oper = pm.OperatorRepository.Get(username); membershipUser = new MembershipUser(ApplicationName, oper.Username, oper.ROWGUID, oper.Email, string.Empty, string.Empty, oper.IsActive, false, DateTime.Today, DateTime.Today, DateTime.Today, DateTime.Today, DateTime.Today ); return membershipUser; } } How can I then retrieve the logged-in user's id (rowguid) in any controller?

    Read the article

  • 'Invalid column name [ColumnName]' on a nested linq query.

    - by Joe
    I've got the following query: ATable .GroupBy(x=> new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID}) .Select(x=>new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault().Timestamp}) results in: SqlException: Invalid column name 'FieldAID' x 5 SqlException: Invalid column name 'FieldBID' x 5 SqlException: Invalid column name 'FieldCID' x 1 I've worked out it has to do with the final reference to Timestamp, because this works: ATable .GroupBy(x=> new {FieldA = x.FieldAID, FieldB = x.FieldBID, FieldC = x.FieldCID}) .Select(x=>new {FieldA = x.Key.FieldA, ..., last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault()}) The query has been simplified. The purpose is to group by a set of variables and then show the last time this grouping occurred in the db. I'm using LINQPad 4 to generate these results, so the Timestamp gives me a string whereas FirstOrDefault gives me the whole object, which isn't ideal. Update: On further testing I've noticed that the number and type of SQLException is related to the class created in the groupby clause. So, ATable .GroupBy(x=> new {FieldA = x.FieldAID}) .Select(x=>new {FieldA = x.Key.FieldA, last_seen = x.OrderByDescending(y=>y.Timestamp).FirstOrDefault()}) results in SqlException: Invalid column name 'FieldAID' x 5

    Read the article

  • MS-Access: What could cause one form with a join query to load right and another not?

    - by Daniel Straight
    Form1 Form1 is bound to Table1. Table1 has an ID field. Form2 Form2 is bound to Table2 joined to Table1 on Table2.Table1_ID=Table1.ID Here is the SQL (generated by Access): SELECT Table2.*, Table1.[FirstFieldINeed], Table1.[SecondFieldINeed], Table1.[ThirdFieldINeed] FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.[Table1_ID]; Form2 is opened with this code in Form1: DoCmd.RunCommand acCmdSaveRecord DoCmd.OpenForm "Form2", , , , acFormAdd, , Me.[ID] DoCmd.Close acForm, "Form1", acSaveYes And when loaded runs: Me.[Table1_ID] = Me.OpenArgs When Form2 is loaded, fields bound to columns from Table1 show up correctly. Form3 Form3 is bound to Table3 joined to Table2 on Table3.Table2_ID=Table2.ID Here is the SQL (generated by Access): SELECT Table3.*, Table2.[FirstFieldINeed], Table2.[SecondFieldINeed] FROM Table2 INNER JOIN Table3 ON Table2.ID = Table3.[Table2_ID]; Form3 is opened with this code in Form2: DoCmd.RunCommand acCmdSaveRecord DoCmd.OpenForm "Form3", , , , acFormAdd, , Me.[ID] DoCmd.Close acForm, "Form2", acSaveYes And when loaded runs: Me.[Table2_ID] = Me.OpenArgs When Form3 is loaded, fields bound to columns from Table2 do not show up correctly. WHY? UPDATES I tried making the join query into a separate query and using that as my record source, but it made no difference at all. If I go to the query for Form3 and view it in datasheet view, I can see that the information that should be pulled into the form is there. It just isn't showing up on the form.

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (We just go up to 45000 rows since we know the bigger picture now.) Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms. If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin's Skew. In describing the table-size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'Heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

< Previous Page | 247 248 249 250 251 252 253 254 255 256 257 258  | Next Page >