Search Results

Search found 13011 results on 521 pages for 'catch block'.

Page 373 of 521

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and then clicks the "Cancel" button, the following exception appears: ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/ Traceback (most recent call last): File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view return view_func(*args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper return func(request, backend, *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete return self.backend.auth_complete(*args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete self.process_error(self.data) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error super(FacebookOAuth2, self).process_error(data) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error raise AuthCanceled(self, data.get('error_description', '')) AuthCanceled: Authentication process canceled Is there any way to catch it in Django?

    Read the article

  • C# P/Invoke VerQueryValue throws OutOfMemoryException?

    - by Bopha
    Hi, below is a code sample I got from an online resource. It is supposed to work on the full framework, but when I try to build it as a C# smart device project, it throws an exception saying it's out of memory. Does anybody know how I can fix it to use on the Compact Framework? The out-of-memory exception occurs when I make the second call to VerQueryValue, which is the last one. Thanks. [DllImport("coredll.dll")] public static extern bool VerQueryValue(byte[] buffer, string subblock, out IntPtr blockbuffer, out uint len); [DllImport("coredll.dll")] public static extern bool VerQueryValue(byte[] pBlock, string pSubBlock, out string pValue, out uint len); // private static void GetAssemblyVersion() { string filename = @"\Windows\MyLibrary.dll"; if (File.Exists(filename)) { try { int handle = 0; Int32 size = 0; size = GetFileVersionInfoSize(filename, out handle); if (size > 0) { bool retValue; byte[] buffer = new byte[size]; retValue = GetFileVersionInfo(filename, handle, size, buffer); if (retValue == true) { bool success = false; IntPtr blockbuffer = IntPtr.Zero; uint len = 0; //success = VerQueryValue(buffer, "\\", out blockbuffer, out len); success = VerQueryValue(buffer, @"\VarFileInfo\Translation", out blockbuffer, out len); if(success) { int p = (int)blockbuffer; //Reads a 16-bit signed integer from unmanaged memory int j = Marshal.ReadInt16((IntPtr)p); p += 2; //Reads a 16-bit signed integer from unmanaged memory int k = Marshal.ReadInt16((IntPtr)p); string sb = string.Format("{0:X4}{1:X4}", j, k); string spv = @"\StringFileInfo\" + sb + @"\ProductVersion"; string versionInfo; VerQueryValue(buffer, spv, out versionInfo, out len); } } } } catch (Exception err) { string error = err.Message; } } }
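
    One approach that is often suggested for the Compact Framework is to avoid the out string overload entirely and read both query results through the IntPtr overload with Marshal.PtrToStringUni. Below is a minimal, untested sketch of that idea (it assumes the usual coredll.dll exports and that the version strings are Unicode; the class and method names are made up):

      using System;
      using System.Runtime.InteropServices;

      static class NativeVersionInfo
      {
          [DllImport("coredll.dll", SetLastError = true)]
          static extern int GetFileVersionInfoSize(string fileName, out int handle);

          [DllImport("coredll.dll", SetLastError = true)]
          static extern bool GetFileVersionInfo(string fileName, int handle, int size, byte[] buffer);

          // Only the IntPtr overload is declared; the string value is read manually below.
          [DllImport("coredll.dll", SetLastError = true)]
          static extern bool VerQueryValue(byte[] block, string subBlock, out IntPtr value, out uint length);

          public static string GetProductVersion(string fileName)
          {
              int handle;
              int size = GetFileVersionInfoSize(fileName, out handle);
              if (size <= 0) return null;

              byte[] buffer = new byte[size];
              if (!GetFileVersionInfo(fileName, handle, size, buffer)) return null;

              IntPtr ptr;
              uint len;
              if (!VerQueryValue(buffer, @"\VarFileInfo\Translation", out ptr, out len)) return null;

              // First entry of the translation table: language id + code page.
              int lang = Marshal.ReadInt16(ptr);
              int codePage = Marshal.ReadInt16(new IntPtr(ptr.ToInt32() + 2));
              string subBlock = string.Format(@"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", lang, codePage);

              if (!VerQueryValue(buffer, subBlock, out ptr, out len)) return null;

              // Read the string out of the version block ourselves instead of letting the
              // marshaler build an 'out string', which is where the original code fails.
              return Marshal.PtrToStringUni(ptr);
          }
      }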

    Read the article

  • CSS compilers and converting IE hacks to conditional css

    - by xckpd7
    Skip to the bottom for the question, but first, a little context. I have been looking into CSS compilers (like Sass & Less) for a while, and have been really interested in them, not because they help me understand anything more easily (I've been doing CSS for a couple of years now) but rather because they cut down on cruft and help me see things more clearly. I recently have been looking into reliably implementing inline-block (and clearfix), which require lots of extraneous code & hacks. Now, according to all the authorities in the field, I shouldn't put IE hacks in the same page I do my CSS in; I should make them conditional. But for me that is a really big hassle to go through and manage all this additional code, which is why I really like things like Less. Instead of applying unsemantic classes, you specify a mixin, apply it once, and you're all set. So I guess I got a little off track (I wanted to explain my points), but basically, I'm at the point where these CSS compilers are very useful for me and allow me to abstract a lot of the cruft away, apply it reliably once, and then just compile it. I would like to have a way to compile IE-specific styles into their own conditional files (a la Less/Sass) so I don't have to deal with managing two files for no reason. Does anything exist, like a script/application that runs and can move underscore/star hacks into their own file?

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses: System.Diagnostics.ProcessStartInfo procStartInfo = new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd); where sqlCmd is something like "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";+ txtQuery.Text +"\"";\ The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is: using (SqlConnection connection = new SqlConnection()) { try { SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString); connection.ConnectionString = builder.ToString(); ; SqlCommand command = new SqlCommand(queryString, connection); connection.Open(); SqlDataReader reader = command.ExecuteReader(); // do stuff w/ reader reader.Close(); } catch (Exception ex) { outputMessage += (ex.Message); } } I've used System.Diagnostics.Stopwatch to time both approaches and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
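
    For what it's worth, here is a hedged sketch of how the ADO.NET side could be timed on its own (the connection string, table name and query are placeholders); consuming every row and column inside the timed region keeps the comparison with parsing sqlcmd's text output reasonably fair:

      using System;
      using System.Data.SqlClient;
      using System.Diagnostics;

      class ReaderTiming
      {
          static void Main()
          {
              // Placeholders -- substitute the real server, database and query.
              const string connectionString = "Data Source=serverName;Initial Catalog=database;Integrated Security=True";
              const string queryString = "SELECT * FROM SomeTable";

              Stopwatch sw = Stopwatch.StartNew();
              int rows = 0;

              using (SqlConnection connection = new SqlConnection(connectionString))
              using (SqlCommand command = new SqlCommand(queryString, connection))
              {
                  connection.Open();
                  using (SqlDataReader reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          // Touch every column so the work is comparable to parsing sqlcmd output.
                          for (int i = 0; i < reader.FieldCount; i++)
                          {
                              object value = reader.GetValue(i);
                          }
                          rows++;
                      }
                  }
              }

              sw.Stop();
              Console.WriteLine("{0} rows in {1} ms", rows, sw.ElapsedMilliseconds);
          }
      }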

    Read the article

  • Which of these is Pythonic? And Pythonic vs. speed

    - by Kashyap Nadig
    Hi! I'm new to Python and just wrote this module-level function: def _interval(patt): """ Converts a string pattern of the form '1y 42d 14h56m' to a timedelta object. y - years (365 days), M - months (30 days), w - weeks, d - days, h - hours, m - minutes, s - seconds""" m = _re.findall(r'([+-]?\d*(?:\.\d+)?)([yMwdhms])', patt) args = {'weeks': 0.0, 'days': 0.0, 'hours': 0.0, 'minutes': 0.0, 'seconds': 0.0} for (n,q) in m: if q=='y': args['days'] += float(n)*365 elif q=='M': args['days'] += float(n)*30 elif q=='w': args['weeks'] += float(n) elif q=='d': args['days'] += float(n) elif q=='h': args['hours'] += float(n) elif q=='m': args['minutes'] += float(n) elif q=='s': args['seconds'] += float(n) return _dt.timedelta(**args) My issue is with the for loop here, i.e. the long if/elif block, and I was wondering if there is a more Pythonic way of doing it. So I rewrote the function as: def _interval2(patt): m = _re.findall(r'([+-]?\d*(?:\.\d+)?)([yMwdhms])', patt) args = {'weeks': 0.0, 'days': 0.0, 'hours': 0.0, 'minutes': 0.0, 'seconds': 0.0} argsmap = {'y': ('days', lambda x: float(x)*365), 'M': ('days', lambda x: float(x)*30), 'w': ('weeks', lambda x: float(x)), 'd': ('days', lambda x: float(x)), 'h': ('hours', lambda x: float(x)), 'm': ('minutes', lambda x: float(x)), 's': ('seconds', lambda x: float(x))} for (n,q) in m: args[argsmap[q][0]] += argsmap[q][1](n) return _dt.timedelta(**args) I tested the execution times of both versions using the timeit module and found that the second one took about 5-6 seconds longer (for the default number of repeats). So my questions are: 1. Which code is considered more Pythonic? 2. Is there a still more Pythonic way of writing this function? 3. What about the trade-offs between Pythonicity and other aspects (like speed in this case) of programming? P.S. I kind of have an OCD for elegant code. EDITED _interval2 after seeing this answer: argsmap = {'y': ('days', 365), 'M': ('days', 30), 'w': ('weeks', 1), 'd': ('days', 1), 'h': ('hours', 1), 'm': ('minutes', 1), 's': ('seconds', 1)} for (n,q) in m: args[argsmap[q][0]] += float(n)*argsmap[q][1]

    Read the article

  • Facebook: Unable to get users' birthday

    - by Sarfraz
    Hello all, I have been trying hard for the last 5 hours or so to get the Facebook user's birthday through Facebook Connect, but I have not managed to do so. The same code worked previously and still works on other sites made earlier (old ones), but it does not seem to work any more. Here is the code: require_once ('./connect/facebook.php'); $facebook = new Facebook("myapi", "mysecret"); $fbuser = $facebook->get_loggedin_user(); if ($fbuser) { $_SESSION['custid'] = $fbuser; $fb_look_fields = array( 'name', 'first_name', 'last_name', 'name', 'timezone', 'name', 'sex', 'birthday', 'birthday_date', 'current_location', 'proxied_email', 'affiliations' ); try { $user_data = $facebook->api_client->users_getInfo($fbuser, $fb_look_fields); } catch (Exception $e) { echo $e->getMessage(); } # for birth date $fb_birthday = $user_data[0]['birthday']; if (!$fb_birthday) { $fql_result_xml = $facebook->api_client->fql_query("SELECT birthday FROM standard_user_info WHERE uid = " . $fbuser); $fb_birthday = $fql_result_xml['birthday']; } echo $fb_birthday; The birthday is always empty even though I have set it to be visible to everyone. I am using the new Facebook PHP client. I even tried getting the info from the standard_user_info table, but without any luck. I would really appreciate your help and ideas. Thanks

    Read the article

  • Migrating to web application with AjaxControlToolkit

    - by Chris
    Hi all, we are currently migrating our ASP.NET website to a web application in Visual Studio 2008. Most of the process has been fairly straightforward, but I have hit one block that is driving me a bit nuts. We are using the AjaxControlToolkit for some functionality, specifically an AutoControlExtender. When this is run locally through VS's development server, the extender (dropdown) does not render after the service returns the resultset. However, if I deploy the migrated solution to our UAT server the extender functions correctly. I have ensured the Ajax Control Toolkit is properly installed locally on my dev machine (and the dll is available in the bin directory), and, using debugging, have ensured the service is called correctly and runs through without error (which it does). The web application was taken from a server running IIS 7. Can anyone confirm whether the Visual Studio 2008 development server requires a different configuration from IIS 7 (as I believe IIS 6 requires a different configuration from IIS 7), and whether there is a resource that provides more info? My own searches have turned up very little in this area. On the other hand, if I am looking in the wrong area, any other tips would be appreciated. Thanks Chris

    Read the article

  • Boost.MultiIndex: Is there a way to share an object between two processes?

    - by Arman
    Hello, I have a big Boost.MultiIndex array, about 10 GB. In order to cut down on reading, I thought there should be a way to keep the data in memory so that other client programs will be able to read and analyse it. What is the proper way to organize this? The array looks like: struct particleID { int ID;// real ID for particle from Gadget2 file "ID" block unsigned int IDf;// position in the file particleID(int id,const unsigned int idf):ID(id),IDf(idf){} bool operator<(const particleID& p)const { return ID<p.ID;} unsigned int getByGID()const {return (ID&0x0FFF);}; }; struct ID{}; struct IDf{}; struct IDg{}; typedef multi_index_container< particleID, indexed_by< ordered_unique< tag<IDf>, BOOST_MULTI_INDEX_MEMBER(particleID,unsigned int,IDf)>, ordered_non_unique< tag<ID>,BOOST_MULTI_INDEX_MEMBER(particleID,int,ID)>, ordered_non_unique< tag<IDg>,BOOST_MULTI_INDEX_CONST_MEM_FUN(particleID,unsigned int,getByGID)> > > particlesID_set; Any ideas are welcome. Kind regards, Arman.

    Read the article

  • Entity Framework with ASP.NET MVC. Updating entity problem

    - by Kitaly
    Hi people. I'm trying to update an entity and its related entities as well. For instance, I have a class Car with a property Category and I want to change its Category. So, I have the following methods in the Controller: public ActionResult Edit(int id) { var categories = context.Categories.ToList(); ViewData["categories"] = new SelectList(categories, "Id", "Name"); var car = context.Cars.Where(c => c.Id == id).First(); return PartialView("Form", car); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(Car car) { var category = context.Categories.Where(c => c.Id == car.Category.Id).First(); car.Category = category; context.UpdateCar(car); context.SaveChanges(); return RedirectToAction("Index"); } The UpdateCar method, in ObjectContext class, follows: public void UpdateCar(Car car) { var attachedCar = Cars.Where(c => c.Id == car.Id).First(); ApplyItemUpdates(attachedCar, car); } private void ApplyItemUpdates(EntityObject originalItem, EntityObject updatedItem) { try { ApplyPropertyChanges(originalItem.EntityKey.EntitySetName, updatedItem); ApplyReferencePropertyChanges(updatedItem, originalItem); } catch (InvalidOperationException ex) { Console.WriteLine(ex.ToString()); } } public void ApplyReferencePropertyChanges(IEntityWithRelationships newEntity, IEntityWithRelationships oldEntity) { foreach (var relatedEnd in oldEntity.RelationshipManager.GetAllRelatedEnds()) { var oldRef = relatedEnd as EntityReference; if (oldRef != null) { var newRef = newEntity.RelationshipManager.GetRelatedEnd(oldRef.RelationshipName, oldRef.TargetRoleName) as EntityReference; oldRef.EntityKey = newRef.EntityKey; } } } The problem is that when I set the Category property after the POST in my controller, the entity state changes to Added instead of remaining as Detached. How can I update one-to-one relationship with Entity Framework and ASP.NET MVC without setting all the properties, one by one like this post?
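
    A pattern that is often suggested for this kind of update is to keep exactly one tracked Car in the context and fix the relationship up by key, rather than assigning a freshly loaded Category to the detached object. A rough sketch against the EF 4 ObjectContext API, mirroring the controller above (the container name "MyEntities", the set names "Cars"/"Categories", and the CategoryReference property that default code generation would produce are assumptions about this model):

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult Edit(Car car)
      {
          // Load the persisted car so the context tracks one, and only one, Car instance.
          var attachedCar = context.Cars.Where(c => c.Id == car.Id).First();

          // Copy the scalar values posted from the form onto the tracked entity.
          context.ApplyCurrentValues("Cars", car);

          // Point the relationship at the chosen category by key instead of assigning a
          // Category object, so nothing new ends up in the Added state.
          attachedCar.CategoryReference.EntityKey =
              new System.Data.EntityKey("MyEntities.Categories", "Id", car.Category.Id);

          context.SaveChanges();
          return RedirectToAction("Index");
      }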

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a LINQ query where I want to get the MAX of the Code column of my table, increase it, and insert a new record with the new Code - just like the IDENTITY feature of SQL Server, except that my Code column is char(5) and can contain alphabetic as well as numeric characters. My problem is that when inserting a new row, two concurrent processes can get the same max and insert an identical Code. My command is: var maxCode = db.Customers.Select(c=>c.Code).Max(); var anotherCustomer = db.Customers.Where(...).SingleOrDefault(); anotherCustomer.Code = GenerateNextCode(maxCode); db.SubmitChanges(); I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions an error occurred: using (var db = new DBModelDataContext()) { DbTransaction tran = null; try { db.Connection.Open(); tran = db.Connection.BeginTransaction(IsolationLevel.Serializable); db.Transaction = tran; . . . . tran.Commit(); } catch { tran.Rollback(); } finally { db.Connection.Close(); } } error: Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. Other IsolationLevels generate this error: Row not found or changed. Please help me, thank you.
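
    Serializable transactions deliberately turn the "two readers saw the same MAX" race into a blocking conflict or deadlock, so the usual companion is a small retry loop that catches SQL Server error 1205 and replays the whole unit of work. A minimal sketch reusing the types from the question (the Where predicate and the customerId variable are hypothetical stand-ins, GenerateNextCode is your own helper, and the inner catch rethrows so the outer loop can see the SqlException):

      const int maxAttempts = 5;
      for (int attempt = 1; ; attempt++)
      {
          try
          {
              using (var db = new DBModelDataContext())
              {
                  DbTransaction tran = null;
                  try
                  {
                      db.Connection.Open();
                      tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                      db.Transaction = tran;

                      var maxCode = db.Customers.Select(c => c.Code).Max();
                      var anotherCustomer = db.Customers.Where(c => c.Id == customerId).SingleOrDefault(); // hypothetical filter
                      anotherCustomer.Code = GenerateNextCode(maxCode);
                      db.SubmitChanges();

                      tran.Commit();
                  }
                  catch
                  {
                      if (tran != null) tran.Rollback();
                      throw;   // let the outer loop decide whether to retry
                  }
                  finally
                  {
                      db.Connection.Close();
                  }
              }
              break; // success
          }
          catch (SqlException ex)
          {
              // 1205 = chosen as deadlock victim; anything else (or too many attempts) is rethrown.
              if (ex.Number != 1205 || attempt == maxAttempts) throw;
              Thread.Sleep(50 * attempt); // small back-off before replaying the transaction
          }
      }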

    Read the article

  • ASP.NET and VB.NET OleDbConnection Problem

    - by Matt
    I'm working on an ASP.NET website where I am using an asp:repeater with paging done through a VB.NET code-behind file. I'm having trouble with the database connection though. As far as I can tell, the paging is working, but I can't get the data to be certain. The database is a Microsoft Access database. The function that should be accessing the database is: Dim pagedData As New PagedDataSource Sub Page_Load(ByVal obj As Object, ByVal e As EventArgs) doPaging() End Sub Function getTheData() As DataTable Dim DS As New DataSet() Dim strConnect As New OleDbConnection("Provider = Microsoft.Jet.OLEDB.4.0;Data Source=App_Data/ArtDatabase.mdb") Dim objOleDBAdapter As New OleDbDataAdapter("SELECT ArtID, FileLocation, Title, UserName, ArtDate FROM Art ORDER BY Art.ArtDate DESC", strConnect) objOleDBAdapter.Fill(DS, "Art") Return DS.Tables("Art").Copy End Function Sub doPaging() pagedData.DataSource = getTheData().DefaultView pagedData.AllowPaging = True pagedData.PageSize = 2 Try pagedData.CurrentPageIndex = Int32.Parse(Request.QueryString("Page")).ToString() Catch ex As Exception pagedData.CurrentPageIndex = 0 End Try btnPrev.Visible = (Not pagedData.IsFirstPage) btnNext.Visible = (Not pagedData.IsLastPage) pageNumber.Text = (pagedData.CurrentPageIndex + 1) & " of " & pagedData.PageCount ArtRepeater.DataSource = pagedData ArtRepeater.DataBind() End Sub The ASP.NET is: <asp:Repeater ID="ArtRepeater" runat="server"> <HeaderTemplate> <h2>Items in Selected Category:</h2> </HeaderTemplate> <ItemTemplate> <li> <asp:HyperLink runat="server" ID="HyperLink" NavigateUrl='<%# Eval("ArtID", "ArtPiece.aspx?ArtID={0}") %>'> <img src="<%# Eval("FileLocation") %>" alt="<%# DataBinder.Eval(Container.DataItem, "Title") %>t"/> <br /> <%# DataBinder.Eval(Container.DataItem, "Title") %> </asp:HyperLink> </li> </ItemTemplate> </asp:Repeater>

    Read the article

  • nHibernate Domain Model and Mapping Files in Separate Projects

    - by Blake Blackwell
    Is there a way to separate out the domain objects and mapping files into two separate projects? I would like to create one project called MyCompany.MyProduct.Core that contains my domain model, and another project called MyCompany.MyProduct.Data.Oracle that contains my Oracle data mappings. However, when I try to unit test this I get the following error message: Named query 'GetClients' not found. Here is my mapping file: <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="MyCompany.MyProduct.Core" namespace="MyCompany.MyProduct.Core" > <class name="MyCompany.MyProduct.Core.Client" table="MY_CLIENT" lazy="false"> <id name="ClientId" column="ClientId"></id> <property name="ClientName" column="ClientName" /> <loader query-ref="GetClients"/> </class> <sql-query name="GetClients" callable="true"> <return class="Client" /> call procedure MyPackage.GetClients(:int_SummitGroupId) </sql-query> </hibernate-mapping> Here is my unit test: try { var cfg = new Configuration(); cfg.Configure(); cfg.AddAssembly( typeof( Client ).Assembly ); ISessionFactory sessionFactory = cfg.BuildSessionFactory(); IStatelessSession session = sessionFactory.OpenStatelessSession(); IQuery query = session.GetNamedQuery( "GetClients" ); query.SetParameter( "int_SummitGroupId", 3173 ); IList<Client> clients = query.List<Client>(); Assert.AreNotEqual( 0, clients.Count ); } catch( Exception ex ) { throw ex; } I think I may be improperly referencing the assembly, because if I put the domain model object in the MyCompany.MyProduct.Data.Oracle project it works. Only when I separate them out into two projects do I run into this problem.
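
    Named queries live in the mapping documents, so NHibernate only sees 'GetClients' if the assembly that embeds the .hbm.xml files is added to the Configuration. Splitting the projects is fine as long as both assemblies get registered - a short sketch (ClientMap stands for any type that lives in the MyCompany.MyProduct.Data.Oracle project, and the .hbm.xml files are assumed to be compiled as Embedded Resources there):

      var cfg = new Configuration();
      cfg.Configure();

      // The domain classes come from the Core assembly...
      cfg.AddAssembly(typeof(Client).Assembly);

      // ...but the mapping documents (and therefore the named query "GetClients")
      // are embedded in the Data.Oracle assembly, so it has to be added as well.
      cfg.AddAssembly(typeof(ClientMap).Assembly);

      ISessionFactory sessionFactory = cfg.BuildSessionFactory();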

    Read the article

  • Difficulty setting ArrayList to java.sql.Blob to save in DB using hibernate

    - by me_here
    I'm trying to save a Java ArrayList in a database (H2) by setting it as a blob, for retrieval later. If this is a bad approach, please say so - I haven't been able to find much information in this area. I have a column of type Blob in the database, and Hibernate maps to this with java.sql.Blob. The code I'm struggling with is: Drawings drawing = new Drawings(); try { ByteArrayOutputStream bos = new ByteArrayOutputStream(); ObjectOutputStream oos = null; oos = new ObjectOutputStream(bos); oos.writeObject(plan.drawingPane21.pointList); byte[] buff = bos.toByteArray(); Blob drawingBlob = null; drawingBlob.setBytes(0, buff); drawing.setDrawingObject(drawingBlob); } catch (Exception e){ System.err.println(e); } The object I'm trying to save into a blob (plan.drawingPane21.pointList) is of type ArrayList<DrawingDot>, DrawingDot being a custom class implementing Serializable. My code is failing on the line drawingBlob.setBytes(0, buff); with a NullPointerException. Help appreciated.

    Read the article

  • Help with editing data in Entity Framework.

    - by Levelbit
    The title of this question is simple because there is no easy explanation for what I'm trying to ask; I hope you'll understand in the end :). I have two tables in my database, Company and Location (relationship: one to many), and corresponding entity sets. In my WPF application I have a datagrid which I want to fill with locations, and I want to be able to edit every row in a separate window as a form of details view (so I don't want to edit my data in the datagrid). I did this by accessing the Location entity from the selected row, creating a new Location entity, and then copying the properties from the original entity to the newly created one - something like cloning the object. After editing, if I press OK the changed data is copied back to the original object, and if I press Cancel nothing happens. Of course, you are probably thinking I could use the NoTracking option and the AttachAsModified method, as was mentioned as a solution in some earlier questions (see: http://stackoverflow.com/questions/803022/changing-entities-in-the-entityframework) by Alex James, but let's say I had some problems with that and have my own reasons for doing it this way. Finally, because the navigation property (Company) of my Location entity is assigned to the newly created Location object during cloning, for some reason an additional object - a copy of the object I want to edit from the datagrid - is created in the object context without my intent (a similar problem: http://blogs.msdn.com/alexj/archive/2009/12/02/tip-47-how-fix-up-can-make-it-hard-to-change-relationships.aspx). So, when I call ObjectContext.SaveChanges it inserts an additional row into my Location table, the same as the one I wanted to edit. I have read about something like this, but I don't quite understand why it happens and how to block or override it. I hope I was as clear as I could be. I would appreciate solutions or other ideas.
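
    The assignment of the tracked Company to the clone is exactly what lets EF's relationship fix-up pull the copy into the ObjectContext, so one way out is to never give the editable copy the navigation property at all. A minimal sketch of the clone/commit steps under that assumption (the property names Street/City, the MyEntities context type, and the CompanyReference property are made-up placeholders for this model):

      // Build the editable copy from scalar values only, so it never enters the context.
      Location MakeEditableCopy(Location original)
      {
          return new Location
          {
              Id = original.Id,
              Street = original.Street,        // copy the scalar properties...
              City = original.City
              // ...but deliberately do NOT set copy.Company = original.Company
          };
      }

      // On OK: copy the edited scalar values back onto the entity the context already tracks.
      void CommitEdit(MyEntities context, Location original, Location editedCopy)
      {
          original.Street = editedCopy.Street;
          original.City = editedCopy.City;

          // Only needed if the company assignment itself can change in the details window:
          // original.CompanyReference.EntityKey = <key of the chosen company>;

          context.SaveChanges();               // exactly one Location is tracked, so one UPDATE
      }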

    Read the article

  • Decrypting a string in C# which was encrypted with OpenSSL in PHP 5.3.2

    - by panny
    Maybe someone can clear this up for me; I have been searching on this for a while now. I used openssl from the console to create a root certificate (privatekey.pem, publickey.pem, mycert.pem, mycertprivatekey.pfx) - see the end of this text for how. The problem remains getting a string encrypted on the PHP side to be decrypted on the C# side with RSACryptoServiceProvider. Any ideas? PHP side: I read the publickey.pem into PHP: $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem")); // rsa encrypt openssl_public_encrypt("123", $encrypted, $server_public_key); and used the privatekey.pem to check that it works: openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem"))); This led me to the conclusion that encryption/decryption works fine on the PHP side with these openssl root certificate files. C# side: In the same manner I read the keys into a .NET C# console program: X509Certificate2 myCert2 = new X509Certificate2(); RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(); try { myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx"); rsa = (RSACryptoServiceProvider)myCert2.PrivateKey; } catch (Exception e) { } string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false)); which shows that encryption/decryption works fine on the C# side with these openssl root certificate files as well. Key generation on unix: 1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem 2) openssl rsa -in privatekey.pem -pubout -out publickey.pem 3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"
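
    For reference, here's a hedged sketch of the C# decryption side for data produced by openssl_public_encrypt. The two details that usually trip this up are padding (openssl_public_encrypt defaults to PKCS#1 v1.5, so the second argument to Decrypt must be false, not OAEP) and transport encoding (the raw ciphertext bytes are normally base64-encoded in PHP before being handed to .NET). The pfx password and the ciphertext value are placeholders:

      using System;
      using System.Security.Cryptography;
      using System.Security.Cryptography.X509Certificates;
      using System.Text;

      class RsaDecryptSketch
      {
          static void Main()
          {
              // Load the private key from the pfx created in step 3 of the key generation.
              var cert = new X509Certificate2(@"C:\mycertprivatekey.pfx", "pfxPassword");
              var rsa = (RSACryptoServiceProvider)cert.PrivateKey;

              // Ciphertext produced in PHP, e.g. base64_encode($encrypted), sent as text.
              string base64Ciphertext = "BASE64-CIPHERTEXT-FROM-PHP";   // placeholder
              byte[] cipher = Convert.FromBase64String(base64Ciphertext);

              // false = PKCS#1 v1.5 padding, matching openssl_public_encrypt's default.
              byte[] plain = rsa.Decrypt(cipher, false);

              Console.WriteLine(Encoding.UTF8.GetString(plain));
          }
      }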

    Read the article

  • Event Handling for MFC Dialog

    - by Maksud
    This is my second question of the day, pardon me. I am writing a wrapper library to communicate with a scanner device. The source code was in C++ MFC. I am converting it to a plain DLL which will be invoked from C#, so I am using DllImport in C# to call the wrapper library. Now, I am provided with MFC code and the library is an ActiveX object, at least I think so. class CDpocx : public CWnd { } So in my wrapper library I will have an instance of CDpocx and will call it via C# P/Invoke. But the problem is that CDpocx also throws some events which I need to catch. In a traditional app, I would just attach a function to them. But how would I attach the events in a non-MFC class? I have seen something like: BEGIN_EVENTSINK_MAP(CVC60Dlg, CDialog) //{{AFX_EVENTSINK_MAP(CVC60Dlg) ON_EVENT(CVC60Dlg, IDC_DPOCXCTRL1, 1 , OnReadyDpocxctrl1, VTS_NONE) //}}AFX_EVENTSINK_MAP END_EVENTSINK_MAP() OnReadyDpocxctrl1 is the function that handles the 1 (Ready) event. How can I get similar functionality in a non-MFC class? Regards, Maksud

    Read the article

  • Pentaho Reporting Tool - Does the .prpt file (report template file) contain datasource information?

    - by Yatendra Goel
    I am new to the Pentaho Reporting Tool and have the following question: when I created a report using Pentaho Report Designer, it produced a report file with a .prpt extension. After that I found an example on the internet where the following code was used to display the report in HTML format: protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { ResourceManager manager = new ResourceManager(); manager.registerDefaults(); String reportPath = "file:" + this.getServletContext().getRealPath("sampleReport.prpt"); try { Resource res = manager.createDirectly(new URL(reportPath), MasterReport.class); MasterReport report = (MasterReport) res.getResource(); HtmlReportUtil.createStreamHTML(report, response.getOutputStream()); } catch (Exception e) { e.printStackTrace(); } } The report got printed successfully. Since we haven't specified any datasource information here, I think the .prpt file must contain that information. If that's true, isn't Jasper a better reporting tool than Pentaho? When we display Jasper reports we also have to provide the datasource details, so in that way the report is flexible and is not bound to any particular database.

    Read the article

  • How do I use an SWT Control to render the content of an SWT/JFace table?

    - by jastram
    I have a JFace TableViewer with an SWT Table, and I would like to custom-render the content of some cells. I would like to use an SWT Control to render the cell content. I would prefer to have only one instance of the Control doing the rendering, but if I have to instantiate one for each row, that would be acceptable. Next, the solution MUST be compatible with the ContentProvider/LabelProvider approach (I am using EMF). This means that I cannot use the solution described in Snippet 126 (http://dev.eclipse.org/viewcvs/index.cgi/org.eclipse.swt.snippets). Next, I thought about using custom drawing. But the catch here is that I have to send individual drawing operations to the graphics context. I was trying to have the Control render the content for me by calling redraw() or print(GC) upon SWT.PaintItem, but that just led to uncontrollable flickering. At this point, my best guess is to use SWT.PaintItem to do the drawing. This will result in duplicate code, as I already have a Control that can render the content the way I'd like. I'd like to prevent this redundancy. Any help is appreciated!

    Read the article

  • Adding objects to a relationship does not work when using MagicalRecord saveWithBlock

    - by yong ho
    The code that performs the save block: [MagicalRecord saveWithBlock:^(NSManagedObjectContext *localContext) { for (NSDictionary *stockDict in objects) { NSString *name = stockDict[@"name"]; Stock *stock = [Stock MR_createInContext:localContext]; stock.name = name; NSArray *categories = stockDict[@"categories"]; if ([categories count] > 0) { for (NSDictionary *categoryObject in categories) { NSString *categoryId = categoryObject[@"_id"]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"categoryId == %@", categoryId]; NSArray *matches = [StockCategory MR_findAllWithPredicate:predicate inContext:localContext]; NSLog(@"%@", matches); if ([matches count] > 0) { StockCategory *cat = [matches objectAtIndex:0]; [stock addCategoriesObject:cat]; } } } } } completion:^(BOOL success, NSError *error) { }]; The Stock model: @class StockCategory; @interface Stock : NSManagedObject @property (nonatomic, retain) NSString * name; @property (nonatomic, retain) NSSet *categories; @end @interface Stock (CoreDataGeneratedAccessors) - (void)addCategoriesObject:(StockCategory *)value; - (void)removeCategoriesObject:(StockCategory *)value; - (void)addCategories:(NSSet *)values; - (void)removeCategories:(NSSet *)values; @end The JSON looks like this: [ { "name": "iPad mini ", "categories": [ { "name": "iPad", "_id": "538c655fae9b3e1502fc5c9e", "__v": 0, "createdDate": "2014-06-02T11:51:59.433Z" } ], }, { "name": "iPad Air ", "categories": [ { "name": "iPad", "_id": "538c655fae9b3e1502fc5c9e", "__v": 0, "createdDate": "2014-06-02T11:51:59.433Z" } ], } ] Opening the Core Data store, you can see that only the stock named "iPad Air" has its categories saved, and I just can't figure out why. As you can see in the saveWithBlock part, I first search the context for the same _id as in the JSON, and then add the category object to the relationship. It works, but not for all of them. Why is that?

    Read the article

  • Using Directives, Namespace and Assembly Reference - all jumbled up with StyleCop!

    - by Jack
    I like to adhere to StyleCop's formatting rules to make code nice and clear, but I've recently had a problem with one of its warnings: All using directives must be placed inside of the namespace. My problem is that I have using directives, an assembly reference (for mocking file deletion), and a namespace to juggle in one of my test classes: using System; using System.IO; using Microsoft.Moles.Framework; using Microsoft.VisualStudio.TestTools.UnitTesting; [assembly: MoledType(typeof(System.IO.File))] namespace MyNamespace { //Some Code } The above allows tests to be run fine - but StyleCop complains about the using directives not being inside the namespace. Putting the usings inside the namespace gives the error that "MoledType" is not recognised. Putting both the usings and the assembly reference inside the namespace gives the error 'assembly' is not a valid attribute location for this declaration. Valid attribute locations for this declaration are 'type'. All attributes in this block will be ignored. It seems I've tried every layout I can but to no avail - either the solution won't build, the mocking won't work or StyleCop complains! Does anyone know a way to set these out so that everything's happy? Or am I going to have to ignore the StyleCop warning in this case?
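
    One layout that usually satisfies both the compiler and StyleCop's "using directives must be placed inside the namespace" rule is to fully qualify the assembly-level attribute (so it needs no using directive at file scope) and move all of the using directives inside the namespace. A sketch of that arrangement (the test class name is made up):

      // The assembly-level attribute stays outside the namespace (the compiler requires
      // that), but it is fully qualified so no using directive is needed at file scope.
      [assembly: Microsoft.Moles.Framework.MoledType(typeof(System.IO.File))]

      namespace MyNamespace
      {
          using System;
          using System.IO;
          using Microsoft.Moles.Framework;
          using Microsoft.VisualStudio.TestTools.UnitTesting;

          [TestClass]
          public class FileDeletionTests
          {
              // Some code / tests here.
          }
      }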

    Read the article

  • IncludeExceptionDetailInFaults not behaving as expected

    - by pdiddy
    I have this simple test project just to test the IncludeExceptionDetailInFaults behavior. public class Service1 : IService1 { public string GetData(int value) { throw new InvalidCastException("test"); return string.Format("You entered: {0}", value); } } [ServiceContract] public interface IService1 { [OperationContract] string GetData(int value); } In the app.config of the service i have it set to true <serviceDebug includeExceptionDetailInFaults="True" /> On the client side: try { using (var proxy = new ServiceReference1.Service1Client()) Console.WriteLine(proxy.GetData(5)); } catch (Exception ex) { Console.WriteLine(ex.Message); } This is what I thought the behavior was: Setting to includeExceptionDetailInFaults=true would propagate the exception detail to the client. But I'm always getting the CommunicationObjectFaultException. I did try having the FaultContract(typeof(InvalidCastException)) on the contract but same behavior, only getting the CommunicationObjectFaultException. The only way to make it work was to throw new FaultException(new InvalidCastException("test")); But I thought with IncludeExceptionDetailInFaults=true the above was done automatically. Am I missing something?
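
    With includeExceptionDetailInFaults enabled the service should already be returning the InvalidCastException as a FaultException<ExceptionDetail>; what tends to hide it is the client's using block, because Dispose() on a channel that has faulted throws CommunicationObjectFaultedException and masks the original fault. A hedged sketch of a client that surfaces the detail instead (it assumes the serviceDebug behavior is actually applied to this service's configuration, and requires using System.ServiceModel;):

      var proxy = new ServiceReference1.Service1Client();
      try
      {
          Console.WriteLine(proxy.GetData(5));
          proxy.Close();
      }
      catch (FaultException<ExceptionDetail> fault)
      {
          // The original exception type and message from the service side.
          Console.WriteLine("{0}: {1}", fault.Detail.Type, fault.Detail.Message);
          proxy.Abort();
      }
      catch (CommunicationException ex)
      {
          Console.WriteLine(ex.Message);
          proxy.Abort();
      }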

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior? Original code: for (int i = 0; i < ct; ++i) { // do some stuff... int iFreq = getFreq(i); double dFreq = iFreq; if (iFreq != 0) { // do some stuff with iFreq... // do some calculations with dFreq... } } While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq so I didn't eliminate it entirely, as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or if any at all, a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase, because the int to double conversion would only occur when iFreq != 0. Changed code: for (int i = 0; i < ct; ++i) { // do some stuff... int iFreq = getFreq(i); if (iFreq != 0) { // do some stuff with iFreq... double dFreq = iFreq; // do some stuff with dFreq... } } Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no probs. I'm now trying to get this working in Azure. I can't for the life of me get it to work. This is my latest error: Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D ---> Microsoft.Window.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown. at ThrowIfFailed(UInt32 hr) at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags) at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options) Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure so it's nothing to do with the dev simulation environment disconnect. This is my code to mount the snapshot: CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size); var container = _blobStorage.GetContainerReference(containerName); var blob = container.GetPageBlobReference(driveName); CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri); string snapshotUri; try { snapshotUri = cloudDrive.Snapshot().AbsoluteUri; Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri); } catch (Exception ex) { throw new InvalidCloudDriveException(string.Format( "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.", cloudDrive.Uri.AbsoluteUri), ex); } cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri); Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive); string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None); The .Mount() method at the end is what's now failing. Please help as this has me royally stumped! Thanks in advance. Dave

    Read the article

  • Piloting Microsoft Word with OLE in C++ Builder: how to bring Word to the foreground

    - by Getz
    Hi! I've got some code (which works fine) for piloting Word with C++ Builder. It's useful for reaching different bookmarks in the document. Variant vNom, vWDocuments, vWDocument, vMSWord, vSignets, vSignet; vNom = WideString("blabla.doc"); try { vMSWord = Variant::GetActiveObject("Word.Application"); } catch(...) { vMSWord = Variant::CreateObject("Word.Application"); } vMSWord.OlePropertySet("Visible", true); vWDocuments = vMSWord.OlePropertyGet("Documents"); vWDocument = vWDocuments.OleFunction("Open", vNom); vSignets = vWDocument.OlePropertyGet("BookMarks"); if (vSignets.OleFunction("Exists", signet)) { vSignet = vSignets.OleFunction("Item", signet); vSignet.OleFunction("Select"); } But once the document is opened, the user can no longer see when another bookmark has been reached, since the application stays in the background. Does anyone know how I can make Word display in the foreground, or flash the document in the taskbar? Thanks!

    Read the article

  • EF 4.0: SaveChanges Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0. namespace Sample.Data.Store.Entities { public partial class StoreDB { public override int SaveChanges(System.Data.Objects.SaveOptions options) { for (Int32 attempt = 1; ; ) { try { return base.SaveChanges(options); } catch (SqlException sqlException) { // Increment attempts attempt++; // Find maximum attempts Int32 maxRetryCount = 5; // Throw if we have reached the maximum number of retries if (attempt == maxRetryCount) throw; // Determine if we should retry or abort. if (!RetryLitmus(sqlException)) throw; else Thread.Sleep(ConnectionRetryWaitSeconds(attempt)); } } } static Int32 ConnectionRetryWaitSeconds(Int32 attempt) { Int32 connectionRetryWaitSeconds = 2000; // Backoff Throttling connectionRetryWaitSeconds = connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt); return (connectionRetryWaitSeconds); } /// <summary> /// Determine from the exception if the execution /// of the connection should be attempted again /// </summary> /// <param name="sqlException">The SqlException that was thrown</param> /// <returns>True if a retry is needed, false if not</returns> static Boolean RetryLitmus(SqlException sqlException) { switch (sqlException.Number) { // The service has encountered an error // processing your request. Please try again. // Error code %d. case 40197: // The service is currently busy. Retry // the request after 10 seconds. Code: %d. case 40501: //A transport-level error has occurred when // receiving results from the server. (provider: // TCP Provider, error: 0 - An established connection // was aborted by the software in your host machine.) case 10053: return (true); } return (false); } } } The problem: how can I make StoreDB.SaveChanges retry on a new DB context after an error occurred? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart
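
    To retry against a brand-new context rather than inside SaveChanges itself, one option is to wrap the whole unit of work in a delegate, so every attempt constructs a fresh StoreDB and replays the changes from scratch. A rough sketch under a few assumptions: StoreDB has the usual generated parameterless constructor, the retry-worthiness check and back-off from above are inlined here so the snippet stands alone, and the Customers set in the usage line is just an example. Note that the overridden SaveChanges above would still retry internally first before this outer loop ever sees an exception.

      using System;
      using System.Data.SqlClient;
      using System.Threading;
      using Sample.Data.Store.Entities;   // where StoreDB lives in the question

      public static class RetryingUnitOfWork
      {
          public static void Execute(Action<StoreDB> work, int maxRetryCount = 5)
          {
              for (int attempt = 1; ; attempt++)
              {
                  try
                  {
                      using (var db = new StoreDB())
                      {
                          work(db);          // re-apply the changes against the fresh context
                          db.SaveChanges();
                      }
                      return;
                  }
                  catch (SqlException sqlException)
                  {
                      // Same idea as RetryLitmus/ConnectionRetryWaitSeconds above, inlined.
                      bool retryable = sqlException.Number == 40197 ||
                                       sqlException.Number == 40501 ||
                                       sqlException.Number == 10053;
                      if (attempt >= maxRetryCount || !retryable)
                          throw;

                      Thread.Sleep(2000 * (Int32)Math.Pow(2, attempt));
                  }
              }
          }
      }

      // Usage, e.g.: RetryingUnitOfWork.Execute(db => db.Customers.AddObject(newCustomer));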

    Read the article
