Search Results

Search found 11573 results on 463 pages for 'store'.

Page 141/463

  • Issues glVertexAttribPointer last 2 parameters?

    - by NoobScratcher
    Introduction: I'll start by explaining my setup, showing samples as I go along. I'm using these tools: OpenGL 3.3, GLSL 330, C++.

    Problem: when I render the wavefront .obj 3D model I get a very weird visual glitch. The model is supposed to be a square, but instead it's a triangulated mess, with some of the vertices stretched out in massive amounts towards the bottom-left side of the frustum.

    Explanation: I'm using std::vectors to store my wavefront .obj model data, using sscanf to read the floating point values into the structure members x, y, z, which are stored in the Points structure variable p:

        int index = IndexAssigner(1, 1);
        ifstream file (list[index].c_str());
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof())
        {
            char modelbuffer[10000];
            file.getline(modelbuffer, 10000);
            switch(modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face+1, face+2, face+3);
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }
            //Turn on FileReader aka "RENDER CODE"
            FileReader = true;
        }

    Then I render the Points vector to the frustum using the .data() member of std::vector.

    Other declarations:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);

    Vector declarations:

        struct Point { float x, y, z; };
        std::vector<int> faces;
        std::vector<Point> points;

    Render code:

        glGenBuffers(1, &vertexbuffer);
        glGenTextures(1, &ModelTexture);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBindTexture(GL_TEXTURE_3D, ModelTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ModelSurface->w, ModelSurface->h, 0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());
        glEnableVertexAttribArray(3);

        //Translation Process
        GLfloat TranslationMatrix[] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 1.0,
            0.0, 0.0, 0.0, 1.0
        };
        //Send Translation Matrix up to the vertex shader
        glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix);
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    I tried looking at what was causing this, went through every function and every parameter, and looked at the man pages. I then found out that it could be my glVertexAttribPointer call. Here are the man pages for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml

    The last two parameters are my problem. How do I write those last two parameters? Do I try putting the data from Points into them?

        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());

    How does it work with vectors? Is it fast? If you can't be bothered to look at the man pages, here are the relevant excerpts:

    Stride: Specifies the byte offset between consecutive generic vertex attributes. If stride is 0, the generic vertex attributes are understood to be tightly packed in the array. The initial value is 0.

    Pointer: Specifies a pointer to the first component of the first generic vertex attribute in the array. The initial value is 0.

    If you want my full source: http://ideone.com/fPfkg

    Thanks again if you do read this.
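    A minimal sketch of how those two parameters are usually supplied when the vertex data lives in a std::vector (the names vbo, ibo and attribLocation are illustrative, not taken from the code above). Once a buffer is bound to GL_ARRAY_BUFFER, the final argument of glVertexAttribPointer is a byte offset into that buffer rather than a CPU-side pointer, and the stride is the byte distance from one vertex to the next:

        struct Point { float x, y, z; };
        std::vector<Point> points;
        std::vector<unsigned int> faces;   // note: .obj indices are 1-based, so subtract 1 when filling this

        GLuint attribLocation = 3;         // whatever location the vertex shader declares
        GLuint vbo, ibo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // byte count comes from the element count, not sizeof(points) (which is the size of the vector object)
        glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Point), points.data(), GL_STATIC_DRAW);

        // stride = sizeof(Point) (or 0, since the data is tightly packed); pointer = offset 0 into the bound VBO
        glVertexAttribPointer(attribLocation, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (const void*)0);
        glEnableVertexAttribArray(attribLocation);

        // indices go into their own buffer when drawing with glDrawElements
        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, faces.size() * sizeof(unsigned int), faces.data(), GL_STATIC_DRAW);
        glDrawElements(GL_QUADS, (GLsizei)faces.size(), GL_UNSIGNED_INT, (const void*)0);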

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked with setting up retail operations for an electronic goods, daily consumables or luxury brand. It is very likely you will be faced with the following questions:

    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities under-pinning each of the business capabilities identified in Step 1?
    3. What are the steps that you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

    Increased time and cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others.

    Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing.

    Embracing ARTS standards as a blueprint for establishing, managing or streamlining your retail operations can benefit you in the following ways:

    Improved time-to-market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and from lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems.

    Lower operating costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization.

    While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • MVC Communication Pattern

    - by Kedu
    This is kind of a follow-up question to http://stackoverflow.com/questions/23743285/model-view-controller-and-callbacks, but I wanted to post it separately because it's kind of a different topic.

    I'm working on a multiplayer card game for the Android platform. I split the project into MVC, which fits the needs pretty well, but I'm currently stuck because I can't figure out a good way to communicate between the different parts. I have everything set up and working with the controller being a big state machine, which is called over and over from the game loop and calls getter methods on the GUI and the Android/network part to get the input. The input itself in the GUI and network is set by input listeners that set a local variable which I read in the getter method. So far so good, this is working.

    But my problem is that the controller has to check every input separately, so if I want to add an input I have to check in which states it's valid and call the getter method from all those states. This is not good: it makes the code look pretty ugly, makes additions uncomfortable and adds redundancy.

    What I've got from the question I mentioned above is that some kind of command or event pattern will fit my needs. What I want to do is to create a shared, thread-safe queue in the controller and, instead of calling all these getter methods, just check the queue for new input and process it. On the other side, the GUI and network don't have all these getters, but instead create an event or command and send it to the controller through, for example, observer/observable.

    Now my problem: I can't figure out a way for these commands/events to fit a common interface (which the queue can store) and still transport different kinds of data (button clicks, cards that are played, the player id the command comes from, synchronization data etc.).

    If I design the communication as a command pattern, I have to stick all the information that is needed to execute the command into it when it's created. That's impossible, because the GUI or network has no knowledge of all the things the controller needs in order to execute what has to happen when, for example, a card is played. I thought about pulling this information into the command when executing it, but across all the different commands I have, I would need all the information the controller has, and thus give the command a reference to the controller, which would make everything in it public; that is bad design, I guess.

    So, I could try some kind of event pattern. I have to transport data in the event. So, like the command, I would have an interface which all events have in common and which can be stored in the shared queue. I could create a big enum with all the different events that are possible, save one of these enums in the actual event, and build a big switch case over the events to process different things for different events. The problem here: I have different data for all the events, but I need a common interface to store the events in a queue. How do I get the specific data if I can only access the event through the interface? Even if that weren't a problem, I'd be creating another big switch case, which looks ugly, and when I want to add a new event, I have to create the event itself, the case, the enum, and the method that's called with the data. I could of course check the event type with the enum and cast the event to its type, so I can call type-specific methods that give me the data I need, but that looks like bad design too.
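    One way to keep a single queue type while still carrying per-event data is double dispatch: each event knows which controller call it maps to, so the game loop never switches on an enum or casts. A sketch (GameEvent, CardPlayedEvent, GameActions and Controller are illustrative names, not classes from the project):

        import java.util.concurrent.ConcurrentLinkedQueue;

        interface GameActions {                          // the narrow surface the controller exposes
            void playCard(int playerId, int cardId);
        }

        interface GameEvent {
            void applyTo(GameActions game);              // double dispatch instead of a big switch
        }

        class CardPlayedEvent implements GameEvent {
            private final int playerId, cardId;
            CardPlayedEvent(int playerId, int cardId) { this.playerId = playerId; this.cardId = cardId; }
            public void applyTo(GameActions game) { game.playCard(playerId, cardId); }
        }

        class Controller implements GameActions {
            final ConcurrentLinkedQueue<GameEvent> queue = new ConcurrentLinkedQueue<GameEvent>();

            void pumpEvents() {                          // called once per pass of the game loop
                GameEvent e;
                while ((e = queue.poll()) != null) e.applyTo(this);
            }

            public void playCard(int playerId, int cardId) { /* state-machine logic for a played card */ }
        }
        // The GUI/network side only needs: controller.queue.offer(new CardPlayedEvent(playerId, cardId));

    Adding a new input then means adding one event class and one method on the actions interface, with no central switch to maintain.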

    Read the article

  • Welcome to BlogEngine.NET 2.9 using Microsoft SQL Server

    If you see this post it means that BlogEngine.NET 2.9 is running and the hard part of creating your own blog is done. There are only a few things left to do.

    Write Permissions: To be able to log in to the blog and write posts, you need to enable write permissions on the App_Data folder. If your blog is hosted at a hosting provider, you can either log into your account's admin page or call the support. You need write permissions on the App_Data folder because all posts, comments, and blog attachments are saved as XML files and placed in the App_Data folder. If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine wiki to get started.

    Security: Once you have write permissions to the App_Data folder, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change the username and password. Passwords are hashed by default, so if you lose your password, please see the BlogEngine wiki for information on recovery.

    Configuration and Profile: Now that you have your blog secured, take a look through the settings and give your new blog a title. BlogEngine.NET 2.9 is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable. Be sure to fill in your author profile to take better advantage of this.

    Themes, Widgets & Extensions: One last thing to consider is customizing the look of your blog. We have a few themes available right out of the box, including two fully set up to use our new widget framework. The widget framework allows drag-and-drop placement on your sidebar as well as editing and configuration right in the widget while you are logged in. Extensions allow you to extend and customize the behavior of your blog. Be sure to check the BlogEngine.NET Gallery at dnbegallery.org as the go-to location for downloading widgets, themes and extensions.

    On the web: You can find BlogEngine.NET on the official website. Here you'll find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex, where the daily builds will be published for anyone to download. Again, new themes, widgets and extensions can be downloaded at the BlogEngine.NET gallery.

    Good luck and happy writing.

    The BlogEngine.NET team

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here.

    Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below.

    The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {
            public static void main(String[] args) {
                try {
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code via drag/drop into a Java source file.

    I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With 'document.write'

    - by ToStringTheory
    Introduction: Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn't fit the model and usage defined by the API or documentation of the plugin? Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site. To use their plugin, you had to include a script tag pointing to the plugin on their server with an API key. The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear.

    The Problem: When trying to use the service, the images did not display. I checked a couple of things and didn't find anything wrong at first. It wasn't until I looked at the function that was called by the inline script that I found the issue: a call to the web service, followed by a call to 'document.write' in its callback. The context in which I was trying to use the plugin happened to be in response to an AJAX call after the document had completely loaded, and after the page has loaded, document.write does nothing.

    My first thought for a solution was to just cache the script from the service and edit it to do something like a return function or callback that I could use to edit the document from. However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server. The hash was updated every few seconds/minutes, expiring old hashes. This meant that I wouldn't be able to edit the script and upload a new version to my server, as the script would stop working a few minutes after I originally got it from the service.

    Solution: The solution eluded me until I realized that this was JavaScript I was dealing with, a language designed so that you can do just about anything to any library, function, or object. At this point, the solution was simple: take control of the document.write function. Using a buffer variable and a simple function call, it is eerily simple to do:

        //what would have been output to the document
        var buffer = "";
        //store a reference to the real document.write
        var dw = document.write;
        //redefine document.write to store to our buffer
        document.write = function (str) { buffer += str; }
        //execute the function containing calls to document.write
        eval('{function encapsulated in <script></script> tags}');
        //restore the original document.write function (just in case)
        document.write = dw;

    That's it. Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL of the page I wanted a snapshot of. After that last line of code, what would have been output to the document (or not, in the case of the AJAX call) was instead stored in buffer.

    Conclusion: While the solution itself is simple, coming from a background much more footed in the .NET platform, I believe this is a prime example of always keeping in mind the language you are working in. While this may seem obvious at first, as I KNEW I was in JavaScript, I never thought of taking control of the document.write function because I am more accustomed to the .NET world. I can't simply replace the functionality of Console.WriteLine.

    Read the article

  • iphone - direct link to iPhone review form from inside iphone

    - by Mike
    I am trying to link directly to the review form for one of my apps. I know that it is possible because Appirater did it in the past, but some change in iTunes turned the API off. Appirater uses this URL:

        NSString *templateReviewURL = @"itms-apps://itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?id=APP_ID&onlyLatestVersion=true&pageNumber=0&sortOrdering=1&type=Purple+Software";

    where APP_ID is the ID of an application. Running this from inside the app gives me the message "Cannot Connect to iTunes Store". This page talks about another kind of link:

        https://userpub.itunes.apple.com/WebObjects/MZUserPublishing.woa/wa/addUserReview?id=APP_ID&type=Purple+Software

    and also

        itms-apps://ax.itunes.apple.com/WebObjects/MZStore.woa/wa/viewContentsUserReviews?type=Purple+Software&id=APP_ID

    The first one works, but only from the desktop Mac. The second gives me the same error as before: Cannot Connect to iTunes Store. iTunes Link Maker is no help either, because it has no tools for iPad links.

    Do you guys know how to link to an app's review form from inside an app? In case you don't know, what kind of tool should I use to dig into this? A packet sniffer? Thanks for any help.

    Read the article

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3, but I keep getting this error:

        System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors
          at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000]
          at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000]
          at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000]
          at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000]
          at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process ()
          at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000]
          at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000]

    I have installed certificates using:

        certmgr -ssl -m smtps://smtp.gmail.com:465

    with this output:

        Mono Certificate Manager - version 2.4.2.3
        Manage X.509 certificates and CRL from stores.
        Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

        X.509 Certificate v3
        Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        Issued to:   C=US, O=Google Inc, CN=Google Internet Authority
        Valid from:  06/08/2009 20:43:27
        Valid until: 06/07/2013 19:43:27
        *** WARNING: Certificate signature is INVALID ***
        Import this certificate into the CA store ? yes

        X.509 Certificate v3
        Issued from: C=US, O=Google Inc, CN=Google Internet Authority
        Issued to:   C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com
        Valid from:  04/22/2010 20:02:45
        Valid until: 04/22/2011 20:12:45
        Import this certificate into the AddressBook store ? yes

        2 certificates added to the stores.

    In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.
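    If it helps to see exactly which chain errors are being reported, a validation callback can log them; this is a diagnostic sketch only (whether a given Mono version consults this callback for SmtpClient is something to verify, and returning true unconditionally disables validation, so it should only be used while debugging):

        // using System; using System.Net; using System.Net.Security;
        // using System.Security.Cryptography.X509Certificates;
        // register once, e.g. in Application_Start
        ServicePointManager.ServerCertificateValidationCallback =
            delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
            {
                Console.WriteLine("SSL policy errors: " + errors);
                if (chain != null)
                    foreach (X509ChainStatus s in chain.ChainStatus)
                        Console.WriteLine(s.Status + ": " + s.StatusInformation);
                return errors == SslPolicyErrors.None;   // temporarily 'return true' only while diagnosing
            };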

    Read the article

  • Convert NSMutableArray to string and back

    - by Friendlydeveloper
    Hello, in my current project I'm facing the following problem: the app needs to exchange data with my server, and the data is stored inside an NSMutableArray on the iPhone. The array holds NSString, NSData and CGPoint values. Now, I thought the easiest way to achieve this was to convert the array into a properly formatted string, send it to my server and store it inside some MySQL database. Later I'd like to request my data from the server, receive the string which represents the contents of my array, and then actually convert it back into an NSMutableArray. So far, I tried something like this:

        NSString *myArrayString = [myArray description];

    Now I send this string to my server and store it inside my MySQL database. That part works really well. However, when I receive the string from my server, I have trouble converting it back into an NSMutableArray. Is there a method that can easily convert an array description back into an array? Unfortunately I couldn't find anything on that so far. Maybe my way of "serializing" the array is wrong right from the start and there is a smarter way to do this. Any help appreciated. Thanks in advance.
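    A sketch of a round-trippable alternative to -description (which is meant for debugging output, not parsing): keyed archiving handles NSString, NSData and NSValue-wrapped CGPoints, and the resulting NSData can be sent in whatever text encoding (e.g. base64 or hex) you choose; the variable names here are illustrative:

        // archive
        NSData *archived = [NSKeyedArchiver archivedDataWithRootObject:myArray];
        // ... encode 'archived' as text, send it to the server, store it, fetch it back ...

        // unarchive
        NSMutableArray *restored =
            [[NSKeyedUnarchiver unarchiveObjectWithData:archived] mutableCopy];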

    Read the article

  • Create non-persistent cookie with FormsAuthenticationTicket

    - by Marcus
    Hello! I'm having trouble creating a non-persistent cookie using the FormsAuthenticationTicket. I want to store user data in the ticket, so I can't use the FormsAuthentication.SetAuthCookie() or FormsAuthentication.GetAuthCookie() methods. Because of this I need to create the FormsAuthenticationTicket and store it in an HttpCookie. My code looks like this:

        DateTime expiration = DateTime.Now.AddDays(7);

        // Create ticket
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(2, user.Email, DateTime.Now,
            expiration, isPersistent, userData, FormsAuthentication.FormsCookiePath);

        // Create cookie
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(ticket));
        cookie.Path = FormsAuthentication.FormsCookiePath;
        if (isPersistent)
            cookie.Expires = expiration;

        // Add cookie to response
        HttpContext.Current.Response.Cookies.Add(cookie);

    When the variable isPersistent is true, everything works fine and the cookie is persisted. But when isPersistent is false, the cookie seems to be persisted anyway: I sign in in a browser window, close it, open the browser again, and I am still logged in. How do I set the cookie to be non-persistent? Is a non-persistent cookie the same as a session cookie? Is the cookie information stored in the session data on the server, or is the cookie transferred in every request/response to the server? Thanks in advance! /Marcus

    Read the article

  • ExtJs - Set a fixed width in a center layout in a Panel

    - by Benjamin
    Hi all. Using ExtJs, I'm trying to design a main panel which is divided into three sub-panels (a tree list, a grid and a panel). The way it works is that you have a tree list (west) with elements; you click on an element, which populates the grid (center); then you click on an element in the grid, and that generates the details panel (east). My main panel containing the three other ones has been defined with a 'border' layout.

    Now the problem I face is that the center region (the grid) has been defined in the code with a fixed width and the east panel with an auto width, but when the interface gets generated, the grid suddenly takes all the remaining space in the interface instead of the east panel. The code looks like this:

        var exploits_details_panel = new Ext.Panel({
            region: 'east',
            autoWidth: true,
            autoScroll: true,
            html: 'test'
        });

        var exploit_commands = new Ext.grid.GridPanel({
            store: new Ext.data.Store({
                autoDestroy: true
            }),
            sortable: true,
            autoWidth: false,
            region: 'center',
            stripeRows: true,
            autoScroll: true,
            border: true,
            width: 225,
            columns: [
                {header: 'command', width: 150, sortable: true},
                {header: 'date', width: 70, sortable: true}
            ]
        });

        var exploits_tree = new Ext.tree.TreePanel({
            border: true,
            region: 'west',
            width: 200,
            useArrows: true,
            autoScroll: true,
            animate: true,
            containerScroll: true,
            rootVisible: false,
            root: {nodeType: 'async'},
            dataUrl: '/ui/modules/select/exploits/tree',
            listeners: {
                'click': function(node) {
                }
            }
        });

        var exploits = new Ext.Panel({
            id: 'beef-configuration-exploits',
            title: 'Auto-Exploit',
            region: 'center',
            split: true,
            autoScroll: true,
            layout: {
                type: 'border',
                padding: '5',
                align: 'left'
            },
            items: [exploits_tree, exploit_commands, exploits_details_panel]
        });

    Here 'var exploits' is my main panel containing the three other sub-panels. 'exploits_tree' is the tree list containing some elements. When you click on one of the elements, the grid 'exploit_commands' gets populated, and when you click on one of the populated elements, the 'exploits_details_panel' panel gets generated. How can I set a fixed width on 'exploit_commands'? Thanks for your time.
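    In an Ext border layout the center region always absorbs whatever space the fixed-size regions leave over, so a width on the center item is ignored. One possible workaround (a sketch, configuration shortened, assuming ExtJS 3.x hbox layout) is to size the two columns explicitly and let the details panel flex instead:

        var exploits = new Ext.Panel({
            id: 'beef-configuration-exploits',
            title: 'Auto-Exploit',
            layout: { type: 'hbox', align: 'stretch' },
            items: [
                exploits_tree,            // keep width: 200 (fixed)
                exploit_commands,         // width: 225 is respected by hbox
                exploits_details_panel    // add flex: 1 so it takes the remaining space
            ]
        });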

    Read the article

  • LINQ to Entities exceptions (ElementAtOrDefault and CompareObjectEqual)

    - by OffApps Cory
    I am working on a shipping platform which will eventually automate shipping through several major carriers. I have a ShipmentsView UserControl which displays a list of Shipments (returned by Entity Framework), and when a user clicks on a shipment item, it spawns a ShipmentEditView and passes the ShipmentID (RecordKey) to that view. I initially wrestled with trying to get the context from the parent (ShipmentsView) and finally gave up, resolving to get to it later. I wanted to do this to keep a single instance of the context.

    Anyhow, I now create a new instance of the context in my ShipmentEditViewModel and query against it for the Shipment record. I know I could just pass the record, but I wanted to use the Ocean Framework that Karl Shifflett wrote and don't want to muck about writing new transition methods.

    So anyhow, I query, and when stepping through I can see that the query returns a record, but as soon as execution reaches the point where the query result is assigned to the e.Result property, it throws up one of the following exceptions, depending on the query I used.

    LINQ to Entities (query syntax):

        Dim RecordID As Decimal = CDec(e.Argument)
        Dim myResult = From ship In _Context.Shipment _
                       Where ship.ShipID = e.Argument _
                       Select ship
        Select Case myResult.Count
            Case 0
                e.Result = New Shipment
            Case 1
                e.Result = myResult(0)
            Case Else
                e.Result = Nothing
        End Select

        "LINQ to Entities does not recognize the method 'System.Object.CompareObjectEqual(System.Object, System.Object, Boolean)' method, and this method cannot be translated into a store expression."

    LINQ to Entities (method call syntax):

        Dim RecordID As Decimal = CDec(e.Argument)
        Dim myResult = _Context.Shipment.Where(Function(s) s.ShipID = RecordID)
        Select Case myResult.Count
            Case 0
                e.Result = New Shipment
            Case 1
                e.Result = myResult(0)
            Case Else
                e.Result = Nothing
        End Select

        "LINQ to Entities does not recognize the method 'SnazzyShippingDAL.Shipment ElementAtOrDefault[Shipment](System.Linq.IQueryable`1[SnazzyShippingDAL.Shipment], Int32)' method, and this method cannot be translated into a store expression."

    I have been trying to get this thing to display a record for like three days. I am seriously thinking about going back and re-engineering it without the MVVM pattern (which I realize I am only starting to learn and understand) if only to make the &$^%ed thing work. Any help will be much appreciated. Cory
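    For reference, a sketch of a query shape that keeps everything translatable by the provider: compare against the strongly typed local instead of the Object-typed e.Argument (which is what drags in CompareObjectEqual), and materialize with FirstOrDefault rather than indexing the IQueryable (which is what produces the ElementAtOrDefault call):

        Dim recordId As Decimal = CDec(e.Argument)
        Dim shipment As Shipment = _Context.Shipment _
                                           .Where(Function(s) s.ShipID = recordId) _
                                           .FirstOrDefault()
        e.Result = If(shipment, New Shipment())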

    Read the article

  • NSTextField autocomplete

    - by Rasmus Styrk
    Does anyone know of any class or lib that can implement autocompletion for an NSTextField? I'm trying to get the standard autocompletion to work, but it is made as a synchronous API, and I get my autocompletion words via an API call over the internet. What I have done so far is:

        - (void)controlTextDidChange:(NSNotification *)obj
        {
            if ([obj object] == self.searchField)
            {
                [self.spinner startAnimation:nil];
                [self.wordcompletionStore completeString:self.searchField.stringValue];

                if (self.doingAutocomplete)
                    return;
                else
                {
                    self.doingAutocomplete = YES;
                    [[[obj userInfo] objectForKey:@"NSFieldEditor"] complete:nil];
                }
            }
        }

    When my store is done, I have a delegate that gets called:

        - (void)completionStore:(WordcompletionStore *)store didFinishWithWords:(NSArray *)arrayOfWords
        {
            [self.spinner stopAnimation:nil];
            self.completions = arrayOfWords;
            self.doingAutocomplete = NO;
        }

    The code that returns the completion list to the NSTextField is:

        - (NSArray *)control:(NSControl *)control textView:(NSTextView *)textView completions:(NSArray *)words forPartialWordRange:(NSRange)charRange indexOfSelectedItem:(NSInteger *)index
        {
            *index = -1;
            return self.completions;
        }

    My problem is that this will always be one request behind, and the completion list only shows on every 2nd character the user inputs. I have tried searching Google and SO like a mad man, but I can't seem to find any solutions. Any help is much appreciated.

    Read the article

  • iPhone core data problem : referenceData64 only defined for abstract class

    - by occe
    I have an application that downloads and parses a big XML file and stores the information using Core Data (approx. 4000 objects/entities). The XML is loaded and parsed on a different thread, which has its own NSManagedObjectContext. When trying to save the entities to the persistent store, I sometimes (about 20% of the time) get the following error:

        2010-03-03 23:41:42.802 xxx[7487:4203] Exception in XML saving
        2010-03-03 23:41:42.802 xxx[7487:4203] Description: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!
        2010-03-03 23:41:42.803 xxx[7487:4203] Name: NSInvalidArgumentException
        2010-03-03 23:41:42.804 xxx[7487:4203] UserInfo: (null)
        2010-03-03 23:41:42.805 xxx[7487:4203] Reason: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!

    I keep a simple integer count of the entities the application creates and compare it to the insertedObjects property of the NSManagedObjectContext before saving; when I get the error, these numbers do not match: insertedObjects in the NSManagedObjectContext is missing about 10 entities. I do not know how I should continue to investigate this problem. Does anyone have any idea how to fix it? Thanks /oscar

    Read the article

  • UIView drawRect: when you draw a line, the rect area is cleared so the previous drawing is gone

    - by snakewa
    It is quite hard to describe, so I uploaded an image to show my problem: http://i42.tinypic.com/2eezamo.jpg

    Basically, in drawRect I draw the line from touchesMoved as the finger moves, and I call setNeedsDisplayInRect: to request a redraw. But I found that once the first line is done, drawing the second line clears that part of the rect, so some of the previous drawing is gone. Here is my implementation:

        - (void)drawRect:(CGRect)rect
        {
            //[super drawRect:rect];
            CGContextRef context = UIGraphicsGetCurrentContext();
            [self drawSquiggle:squiggle at:rect inContext:context];
        }

        - (void)drawSquiggle:(Squiggle *)squiggle at:(CGRect)rect inContext:(CGContextRef)context
        {
            CGContextSetBlendMode(context, kCGBlendModeMultiply);

            UIColor *squiggleColor = squiggle.strokeColor; // get squiggle's color
            CGColorRef colorRef = [squiggleColor CGColor]; // get the CGColor
            CGContextSetStrokeColorWithColor(context, colorRef);

            NSMutableArray *points = [squiggle points]; // get points from squiggle

            // retrieve the NSValue object and store the value in firstPoint
            CGPoint firstPoint; // declare a CGPoint
            [[points objectAtIndex:0] getValue:&firstPoint];

            // move to the point
            CGContextMoveToPoint(context, firstPoint.x, firstPoint.y);

            // draw a line from each point to the next in order
            for (int i = 1; i < [points count]; i++)
            {
                NSValue *value = [points objectAtIndex:i]; // get the next value
                CGPoint point; // declare a new point
                [value getValue:&point]; // store the value in point

                // draw a line to the new point
                CGContextAddLineToPoint(context, point.x, point.y);
            } // end for

            CGContextStrokePath(context);
        }
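    One common way around the clearing (a sketch; canvasImage is a hypothetical UIImage property, not part of the code above) is to composite each finished stroke into an offscreen image, so drawRect: only has to draw that image plus the stroke currently in progress instead of relying on the old pixels still being there:

        // after a squiggle is finished, bake it into the offscreen image
        UIGraphicsBeginImageContext(self.bounds.size);
        [self.canvasImage drawInRect:self.bounds];                     // everything drawn so far
        [self drawSquiggle:squiggle at:self.bounds
                 inContext:UIGraphicsGetCurrentContext()];             // the stroke that just ended
        self.canvasImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // drawRect: then draws canvasImage first, and only the current squiggle on top of it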

    Read the article

  • How to use HTTP Live Streaming protocol in iPhone SDk 3.0

    - by Pugal Devan
    Hi guys, I have developed an iPhone application and submitted it to the App Store, but my application got rejected based on the criteria below:

        Thank you for submitting your yyyyyyyy application. We have reviewed your application and have determined that it cannot be posted to the App Store at this time because it is not using the HTTP Live Streaming protocol to broadcast streaming video. HTTP Live Streaming is required when streaming video feeds over the cellular network, in order to have an optimal user experience and utilize cellular best practices. This protocol automatically determines bandwidth available to users and adjusts the bandwidth appropriately, even as bandwidth streams change. This allows you the flexibility to have as many streams as you like, as long as 64 kbps is set as the baseline feed.

    In my app I have to stream prerecorded m4v and mp3 files from my server. I used MPMoviePlayerController to stream and play those videos/audio. How do I implement the HTTP Live Streaming protocol in my app? Also, can I get some sample code? Thanks in advance!
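    For the playback side, a minimal sketch (the URL is a placeholder): MPMoviePlayerController accepts an HTTP Live Streaming playlist (.m3u8) URL the same way it accepts a file URL. The prerecorded files themselves have to be segmented into an HLS playlist on the server side first, for example with Apple's segmenting tools or an encoder that emits HLS.

        NSURL *streamURL = [NSURL URLWithString:@"http://www.example.com/video/stream.m3u8"];
        MPMoviePlayerController *player =
            [[MPMoviePlayerController alloc] initWithContentURL:streamURL];
        [player play];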

    Read the article

  • Flex DataGrid with ComboBox itemRenderer

    - by Jamie Love
    Hi there, I'm going spare trying to figure out the "correct" way to embed a ComboBox inside a Flex (3.4) DataGrid. By rights (e.g. according to this page: http://blog.flexmonkeypatches.com/2008/02/18/simple-datagrid-combobox-as-item-editor-example/) it should be easy, but I can't for the life of me make this work. The difference from the example linked above is that my display value (what the user sees) is different from the id value I want to select on and store in my data provider. So what I have is:

        <mx:DataGridColumn headerText="Type" width="200" dataField="TransactionTypeID"
                           editorDataField="value" textAlign="center" editable="true"
                           rendererIsEditor="true">
            <mx:itemRenderer>
                <mx:Component>
                    <mx:ComboBox dataProvider="{parentDocument.transactionTypesData}"/>
                </mx:Component>
            </mx:itemRenderer>
        </mx:DataGridColumn>

    where transactionTypesData has both 'data' and 'label' fields (as the ComboBox expects; why it doesn't provide both a labelField and an idField I'll never know). Anyway, the above MXML code doesn't work in two ways:

    1. The combo box does not show up with any selected item.
    2. After selecting an item, it does not store the selected item back to the data provider.

    So, has anyone got a similar situation working?

    Read the article

  • Hidden Field losing its Value on postback

    - by Ratan
    I have an ascx page where I am using a hidden field to store the value of a drop-down box that is generated using a Google address finder. My problem is that when I try to store the value directly in the hidden field in a button click event:

        hfDdlVerifyID.Value = ddlVerifySS.SelectedValue;

    the value is stored, but on postback it is lost again. Whereas, if I try to use ScriptManager to do it, nothing is stored:

        getBuild.AppendLine("$get('" + hfDdlVerifyID.ClientID + "').value = $get('" + ddlVerifySS.ClientID + ").value;");
        ScriptManager.RegisterClientScriptBlock(this.Page, this.GetType(), "storeHidden", getBuild.ToString(), true);
        // Page.ClientScript.RegisterClientScriptBlock(this.GetType(), "storeHidden", getBuild.ToString(), true);
        string test = hfDdlVerifyID.Value.ToString();

    The ascx page is:

        <asp:UpdatePanel ID="ddlUpdate" runat="server">
            <ContentTemplate>
                <asp:Panel ID="pVerify" runat="server">
                    <br />
                    <fieldset>
                        <legend>
                            <asp:Literal ID="lVerify" runat="server" />
                        </legend>
                        <asp:CheckBox ID="cbVerify" runat="server"
                            Text="Use the value from the following list, (Uncheck to accept address as it is)."
                            Checked="true" />
                        <br />
                        <asp:DropDownList ID="ddlVerifySS" runat="server"
                            onselectedindexchanged="ddlVerifySS_SelectIndexChange" />
                        <asp:HiddenField ID="hfDdlVerifyID" runat="server" />
                    </fieldset>
                </asp:Panel>
            </ContentTemplate>
        </asp:UpdatePanel>
        <padrap:Button ID="bVerify" runat="server" CssClass="btn" OnClick="bVerify_Click" Text="Verify Address" />
        <asp:Button ID="btnSubSite" runat="server" Text="Save" CssClass="btn"
            OnClick="save_btn_Click_subSite" onLoad="ddlVerify_Load" />

    Read the article

  • Generated signed X.509 client certificate is invalid (no certificate chain to its CA)

    - by Genady
    I use Bouncy Castle for generation of X.509 client certificates, signing them with a known CA. First I read the CA certificate from the certificate store, then generate the client certificate and sign it using the CA. Validation of the certificate fails due to the following issue:

        A certificate chain could not be built to a trusted root authority.

    As I understand it, this is due to the certificate not being related to the CA. Here is a code sample:

        public static X509Certificate2 GenerateCertificate(X509Certificate2 caCert, string certSubjectName)
        {
            // Generate Certificate
            var cerKp = kpgen.GenerateKeyPair();
            var certName = new X509Name(true, certSubjectName); // subjectName = user
            var serialNo = BigInteger.ProbablePrime(120, new Random());

            X509V3CertificateGenerator gen2 = new X509V3CertificateGenerator();
            gen2.SetSerialNumber(serialNo);
            gen2.SetSubjectDN(certName);
            gen2.SetIssuerDN(new X509Name(true, caCert.Subject));
            gen2.SetNotAfter(DateTime.Now.AddDays(100));
            gen2.SetNotBefore(DateTime.Now.Subtract(new TimeSpan(7, 0, 0, 0)));
            gen2.SetSignatureAlgorithm("SHA1WithRSA");
            gen2.SetPublicKey(cerKp.Public);

            AsymmetricCipherKeyPair akp = DotNetUtilities.GetKeyPair(caCert.PrivateKey);
            Org.BouncyCastle.X509.X509Certificate newCert = gen2.Generate(caKp.Private);

            // used for getting a private key
            X509Certificate2 userCert = ConvertToWindows(newCert, cerKp);

            if (caCert22.Verify()) // works well for CA
            {
                if (userCert.Verify()) // fails for client certificate
                {
                    return userCert;
                }
            }
            return null;
        }

        private static X509Certificate2 ConvertToWindows(Org.BouncyCastle.X509.X509Certificate newCert, AsymmetricCipherKeyPair kp)
        {
            string tempStorePwd = "abcd1234";
            var tempStoreFile = new FileInfo(Path.GetTempFileName());
            try
            {
                // store key
                {
                    var newStore = new Pkcs12Store();
                    var certEntry = new X509CertificateEntry(newCert);
                    newStore.SetCertificateEntry(
                        newCert.SubjectDN.ToString(),
                        certEntry
                    );
                    newStore.SetKeyEntry(
                        newCert.SubjectDN.ToString(),
                        new AsymmetricKeyEntry(kp.Private),
                        new[] { certEntry }
                    );
                    using (var s = tempStoreFile.Create())
                    {
                        newStore.Save(
                            s,
                            tempStorePwd.ToCharArray(),
                            new SecureRandom(new CryptoApiRandomGenerator())
                        );
                    }
                }
                // reload key
                return new X509Certificate2(tempStoreFile.FullName, tempStorePwd);
            }
            finally
            {
                tempStoreFile.Delete();
            }
        }
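    A small diagnostic sketch (framework classes only, nothing Bouncy Castle specific) that reports which link of the chain is missing or untrusted, which may help confirm whether the generated client certificate actually chains up to the CA in the store:

        // using System; using System.Security.Cryptography.X509Certificates;
        var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck; // keep the check about trust, not CRLs
        bool ok = chain.Build(userCert);
        foreach (X509ChainElement element in chain.ChainElements)
            Console.WriteLine(element.Certificate.Subject);
        foreach (X509ChainStatus status in chain.ChainStatus)
            Console.WriteLine(status.Status + ": " + status.StatusInformation);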

    Read the article

  • Handling multiple column data with Java

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in, it then allows the user to search for a user's details using their email:

        NAME            ROLE          EMAIL
        ---------------------------------------------------
        Joe Bloggs      Manager       [email protected]
        John Smith      Consultant    [email protected]
        Alan Wright     Tester        [email protected]
        ...

    The problem I am suffering is that I need to store a large number of details of all people that have worked at the company. The file containing these details will be written on a yearly basis, simply for reporting purposes, but the program will need to be able to access these details quickly.

    The way I aim to access these files is to have a program that asks the user for the unique email of a member of staff and then returns the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file.

    What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time.
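    For the lookup-by-email part, a sketch of the simplest approach: load the file once into a map keyed by email, so each search is a constant-time lookup rather than a file scan. The file name, delimiter and field order below are assumptions for illustration.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class StaffDirectory {
            static class StaffRecord {
                final String name, role, email;
                StaffRecord(String name, String role, String email) {
                    this.name = name; this.role = role; this.email = email;
                }
            }

            public static void main(String[] args) throws IOException {
                Map<String, StaffRecord> byEmail = new HashMap<String, StaffRecord>();
                BufferedReader in = new BufferedReader(new FileReader("staff.txt"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] f = line.split("\\t");               // name <TAB> role <TAB> email
                    if (f.length == 3) {
                        byEmail.put(f[2].toLowerCase(), new StaffRecord(f[0], f[1], f[2]));
                    }
                }
                in.close();

                StaffRecord hit = byEmail.get("someone@example.com".toLowerCase());
                if (hit != null) {
                    System.out.println(hit.name + " - " + hit.role);
                }
            }
        }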

    Read the article

  • WCF/MSMQ Transport Security with Certificates

    - by user104295
    Hi there, my goal is to secure the communication between MSMQ Queue Managers; I don't want unknown clients sending messages to my MSMQ server. I have spent many hours now trying to get transport security working for the net.msmq binding in WCF, where MSMQ is in Workgroup mode and the client and server do not have Active Directory... so I'm using certificates. I have created a new X.509 certificate, called Kristan, and put it into the "Trusted People" store on the server and into the My store of Current User on the client. The error I'm getting is:

        An error occurred while sending to the queue: Unrecognized error -1072824272 (0xc00e0030).
        Ensure that MSMQ is installed and running. If you are sending to a local queue, ensure the queue exists
        with the required access mode and authorization.

    Using SmartSniff, I see that there's no attempted connection with the remote MSMQ; however, it's an error probably coming from the local queue manager. The stack trace is:

        at System.ServiceModel.Channels.MsmqOutputChannel.OnSend(Message message, TimeSpan timeout)
        at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
        at System.ServiceModel.Dispatcher.OutputChannelBinder.Send(Message message, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
        at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    The code:

        EndpointAddress endpointAddress = new EndpointAddress(new Uri(endPointAddress));

        NetMsmqBinding clientBinding = new NetMsmqBinding();
        clientBinding.Security.Mode = NetMsmqSecurityMode.Transport;
        clientBinding.Security.Transport.MsmqAuthenticationMode = MsmqAuthenticationMode.Certificate;
        clientBinding.Security.Transport.MsmqProtectionLevel = System.Net.Security.ProtectionLevel.Sign;
        clientBinding.ExactlyOnce = false;
        clientBinding.UseActiveDirectory = false;

        // start new
        var channelFactory = new ChannelFactory<IAsyncImportApi>(clientBinding, endpointAddress);
        channelFactory.Credentials.ClientCertificate.SetCertificate("CN=Kristan", StoreLocation.CurrentUser, StoreName.My);

    The queue is flagged as 'Authenticated' on the server. I have checked the effect of this: if I turn off all security in the client send, then I get 'Signature is invalid', which is understandable and shows that it's definitely looking for a signature. Are there any special ports that I need to check are open for certificate-based MSMQ authentication? Thanks, Kris

    Read the article

  • MS SQL 2008, join or no join?

    - by Patrick
    Just a small question regarding joins. I have a table with around 30 fields, and I was thinking about making a second table to store 10 of those fields; then I would just join them in with the main data. The 10 fields that I was planning to store in the second table do not get queried directly; they're just some settings for the data in the first table. Something like:

        Table 1
            Id
            Data1
            Data2
            Data3
            etc ...

        Table 2
            Id (same id as table one)
            Settings1
            Settings2
            Settings3

    Is this a bad solution? Should I just use one table? How much performance impact does it have? All entries in Table 1 would also then have an entry in Table 2.

    A small update is in order: most of the Data fields are of the type varchar, and two of them are of the type text. How is indexing treated? My plan is to index two data fields: email (varchar 50) and author (varchar 20). And yes, all records in Table 1 will have a record in Table 2. Most of the settings fields are of the bit type, around 80%; the rest is a mix between int and varchar. The varchars can be null.
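    For reference, a sketch of what the 1:1 split looks like at query time (table and column names follow the layout above and are illustrative); with Id as the primary key on both tables, the extra join is a single key lookup per row:

        SELECT t1.Id, t1.Email, t1.Author, t2.Settings1, t2.Settings2
        FROM   Table1 AS t1
        INNER JOIN Table2 AS t2 ON t2.Id = t1.Id
        WHERE  t1.Email = 'someone@example.com';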

    Read the article

  • Dynamics CRM Customer Portal Accelerator Installation

    - by saturdayplace
    (I've posted this question on the CodePlex forums too, but have yet to get a response.) I've got an on-premise installation of CRM and I'm trying to hook the portal to it. My connection string in web.config:

        <connectionStrings>
            <add name="Xrm" connectionString="Authentication Type=AD; Server=http://myserver:myport/MyOrgName; User ID=mydomain\crmwebuser; Password=thepassword" />
        </connectionStrings>

    And my membership provider:

        <membership defaultProvider="CustomCRMProvider">
            <providers>
                <add connectionStringName="Xrm" applicationName="/" enablePasswordRetrieval="false"
                     enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="true"
                     passwordFormat="Hashed" minRequiredPasswordLength="1" minRequiredNonalphanumericCharacters="0"
                     name="CustomCRMProvider" type="System.Web.Security.SqlMembershipProvider" />
            </providers>
        </membership>

    Now, I'm super new to MS-style web development, so please help me if I'm missing something. In Visual Studio 2010, when I go to Project > ASP.NET Configuration, it launches the Web Site Administration Tool. When I click the Security tab there, I get the following error:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.

        The following message may help in diagnosing the problem: An error occurred while attempting to initialize a System.Data.SqlClient.SqlConnection object. The value that was provided for the connection string may be wrong, or it may contain an invalid syntax. Parameter name: connectionString

    I can't see what I'm doing wrong here. Does the user mydomain\crmwebuser need certain permissions in the SQL database, or somewhere else?

    Edit: On the home page of the Web Site Administration Tool, I have the following:

        Application: /
        Current User Name: MACHINENAME\USERACCOUNT

    which is obviously a different set of credentials than mydomain\crmwebuser. Is this part of the problem?

    Read the article

  • API Failure Sqlite

    - by Joseph Anderson
    I ran the Windows 8 App Certification Kit on my app and it says it will fail because of SQLite. Am I referencing code incorrectly, or can I ignore this problem? Here is the response:

        Impact if not fixed: Using an API that is not part of the Windows SDK for Windows Store apps violates the Windows Store certification requirements.

        API __CppXcptFilter in msvcr110.dll is not supported for this application type. sqlite3.dll calls this API.
        API __clean_type_info_names_internal in msvcr110.dll is not supported for this application type. sqlite3.dll calls this API.
        API __crtTerminateProcess in msvcr110.dll is not supported for this application type. sqlite3.dll calls this API.
        API __crtUnhandledException in msvcr110.dll is not supported for this application type. sqlite3.dll calls this API.

    I am referencing this file:

        SQLite for Windows Runtime
        SQLite.WinRT, Version=3.7.14
        C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0\ExtensionSDKs\SQLite.WinRT\3.7.14\

    in my Windows 8 Metro app using XAML.

    Read the article

