Search Results

Search found 85288 results on 3412 pages for 'new bie'.


  • Blackberry - Exception when sending SMS programmatically

    - by user213199
    Hello all, I am developing a BlackBerry application and trying to send an SMS programmatically to a GSM number. I have gone through many related questions and answers and finally added the code below. When it tries to send a text message to a particular mobile number, the send fails and throws the exception "blocking operation not permitted on event dispatch thread". So I created a separate background thread and put the SMS code there, but I still see the same exception. Could someone please suggest what I am doing wrong, or how to do this?

        class DummyFirst extends MainScreen {
            private Bitmap background;
            private VerticalFieldManager _container;
            private VerticalFieldManager mainVerticalManager;
            private HorizontalFieldManager horizontalFldManager;
            private BackGroundThread _thread;
            CustomControl buttonControl1;

            public DummyFirst() {
                super();
                LabelField appTitle = new LabelField("Dummy App");
                setTitle(appTitle);
                background = Bitmap.getBitmapResource("HomeBack.png");
                _container = new VerticalFieldManager(Manager.NO_VERTICAL_SCROLL
                        | Manager.NO_VERTICAL_SCROLLBAR) {
                    protected void paint(Graphics g) {
                        // Instead of these next two lines, draw your bitmap
                        int y = DummyFirst.this.getMainManager().getVerticalScroll();
                        g.clear();
                        g.drawBitmap(0, 0, background.getWidth(), background.getHeight(),
                                background, 0, 0);
                        super.paint(g);
                    }

                    protected void sublayout(int maxWidth, int maxHeight) {
                        int width = background.getWidth();
                        int height = background.getHeight();
                        super.sublayout(width, height);
                        setExtent(width, height);
                    }
                };
                mainVerticalManager = new VerticalFieldManager(Manager.NO_VERTICAL_SCROLL
                        | Manager.NO_VERTICAL_SCROLLBAR) {
                    protected void sublayout(int maxWidth, int maxHeight) {
                        int width = background.getWidth();
                        int height = background.getHeight();
                        super.sublayout(width, height);
                        setExtent(width, height);
                    }
                };
                HorizontalFieldManager horizontalFldManager = new HorizontalFieldManager(Manager.USE_ALL_WIDTH);
                buttonControl1 = new CustomControl("Send", ButtonField.CONSUME_CLICK, 83, 15);
                horizontalFldManager.add(buttonControl1);
                this.setStatus(horizontalFldManager);
                FieldListener listner = new FieldListener();
                buttonControl1.setChangeListener(listner);
                _container.add(mainVerticalManager);
                this.add(_container);
            }

            class FieldListener implements FieldChangeListener {
                public void fieldChanged(Field f, int context) {
                    if (f == buttonControl1) {
                        _thread = new BackGroundThread();
                        _thread.start();
                    }
                }
            }

            private class BackGroundThread extends Thread {
                public BackGroundThread() {
                    /*** initialize parameters in constructor *****/
                }

                public void run() {
                    UiApplication.getUiApplication().invokeLater(new Runnable() {
                        public void run() {
                            try {
                                MessageConnection msgConn = (MessageConnection) Connector.open("sms://:0");
                                Message msg = msgConn.newMessage(MessageConnection.TEXT_MESSAGE);
                                TextMessage txtMsg = (TextMessage) msg;
                                String msgAdr = "sms://+919861348735";
                                txtMsg.setAddress(msgAdr);
                                txtMsg.setPayloadText("Test Message");
                                // here the exception is thrown
                                msgConn.send(txtMsg);
                                System.out.println("Sending SMS success !!!");
                            } catch (Exception e) {
                                System.out.println(e.getMessage());
                                e.printStackTrace();
                            }
                        } // run
                    });
                }
            }

            public boolean onClose() {
                System.out.println("close event called, request to be in the background....");
                UiApplication.getUiApplication().requestBackground();
                return true;
            }
        }

    I resolved this issue by creating a separate thread that performs the send directly, without wrapping it in UiApplication.invokeLater() (which was scheduling the blocking call back onto the event dispatch thread) and without specifying a port. Here it is:

        SMSThread smsthread = new SMSThread("Some message", mobNumber);
        smsthread.start();

        class SMSThread extends Thread {
            Thread myThread;
            MessageConnection msgConn;
            String message;
            String mobilenumber;

            public SMSThread(String textMsg, String mobileNumber) {
                message = textMsg;
                mobilenumber = mobileNumber;
            }

            public void run() {
                try {
                    msgConn = (MessageConnection) Connector.open("sms://+" + mobilenumber);
                    TextMessage text = (TextMessage) msgConn.newMessage(MessageConnection.TEXT_MESSAGE);
                    text.setPayloadText(message);
                    msgConn.send(text);
                    msgConn.close();
                } catch (Exception e) {
                    System.out.println("Exception: " + e);
                }
            }
        }

    Read the article

  • How to propagate http response code from back-end to client

    - by Manoj Neelapu
    Oracle Service Bus can be used for pass-through cases. Some use cases require propagating the HTTP response code back to the caller. http://forums.oracle.com/forums/thread.jspa?messageID=4326052&#4326052 is one such example, which we will try to accomplish in this tutorial. We will demonstrate this feature using Oracle Service Bus (11.1.1.3.0). We will also use commons-logging-1.1.1, httpcomponents-client-4.0.1 and httpcomponents-core-4.0.1 to write the client for the demonstration.

    First we create a simple JSP which always sets the response code to 304. The JSP snippet looks like this:

        <%@ page language="java"
                 contentType="text/xml; charset=UTF-8"
                 pageEncoding="UTF-8" %>
        <%
            System.out.println("Servlet setting Responsecode=304");
            response.setStatus(304);
            response.flushBuffer();
        %>

    We will now deploy this JSP on a WebLogic server with URI=http://localhost:7021/reponsecode/. For this JSP we will create a simple Any XML business service (BS). We will also create a proxy service as shown below. Once the proxy is created, we configure the pipeline for the proxy to use a route node, which invokes the business service (JSPCaller) created in the first place. We then create an error handler for the route node and add a stage. When the HTTP business service sends a request, the JSP sends the response back. If the response code is not 200, the HTTP business service treats that as an error and the error handler configured above is invoked. We will print $outbound to show the response code sent by the JSP, followed by the next actions. To test this I created a simple client:

        import org.apache.http.Header;
        import org.apache.http.HttpEntity;
        import org.apache.http.HttpHost;
        import org.apache.http.HttpResponse;
        import org.apache.http.HttpVersion;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.conn.ClientConnectionManager;
        import org.apache.http.conn.scheme.PlainSocketFactory;
        import org.apache.http.conn.scheme.Scheme;
        import org.apache.http.conn.scheme.SchemeRegistry;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager;
        import org.apache.http.params.BasicHttpParams;
        import org.apache.http.params.HttpParams;
        import org.apache.http.params.HttpProtocolParams;
        import org.apache.http.util.EntityUtils;

        /**
         * @author MNEELAPU
         */
        public class TestProxy304 {
            public static void main(String arg[]) throws Exception {
                HttpHost target = new HttpHost("localhost", 7021, "http");

                // general setup
                SchemeRegistry supportedSchemes = new SchemeRegistry();
                // Register the "http" protocol scheme, it is required
                // by the default operator to look up socket factories.
                supportedSchemes.register(new Scheme("http",
                        PlainSocketFactory.getSocketFactory(), 7021));

                // prepare parameters
                HttpParams params = new BasicHttpParams();
                HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
                HttpProtocolParams.setContentCharset(params, "UTF-8");
                HttpProtocolParams.setUseExpectContinue(params, true);

                ClientConnectionManager connMgr = new ThreadSafeClientConnManager(params,
                        supportedSchemes);
                DefaultHttpClient httpclient = new DefaultHttpClient(connMgr, params);

                HttpGet req = new HttpGet("/HttpResponseCode/ProxyExposed");
                System.out.println("executing request to " + target);
                HttpResponse rsp = httpclient.execute(target, req);
                HttpEntity entity = rsp.getEntity();

                System.out.println("----------------------------------------");
                System.out.println(rsp.getStatusLine());
                Header[] headers = rsp.getAllHeaders();
                for (int i = 0; i < headers.length; i++) {
                    System.out.println(headers[i]);
                }
                System.out.println("----------------------------------------");
                if (entity != null) {
                    System.out.println(EntityUtils.toString(entity));
                }

                // When the HttpClient instance is no longer needed,
                // shut down the connection manager to ensure
                // immediate deallocation of all system resources
                httpclient.getConnectionManager().shutdown();
            }
        }

    On compiling and executing this, we see the output below in STDOUT, which clearly indicates that the response code was propagated from the business service to the proxy service:

        executing request to http://localhost:7021
        ----------------------------------------
        HTTP/1.1 304 Not Modified
        Date: Tue, 08 Jun 2010 16:13:42 GMT
        Content-Type: text/xml; charset=UTF-8
        X-Powered-By: Servlet/2.5 JSP/2.1
        ----------------------------------------

    Read the article

  • Career development as a Software Developer without becoming a manager.

    - by albertpascual
    I'm a developer. I like to write new, exciting code every day; my perfect day at work is one where I wake up knowing I get to write some code I haven't written before, or use a new framework, language, or platform that is unknown to me. The best days in the office are when a project is waiting for me to architect or write. In my 15 years in the development field, in order to get a better salary I had to manage people: not just lead developers, but actually manage people. Something I found out when I got into a management position is that I'm not that good at managing people, and I'm not afraid to say it. I do not enjoy that part of the job; worst of all, it takes time away from what I really like. Leading developers and managing people are very different things. I do like teaching and leading developers in a project. Yet most people believe, and it is true in most companies, that the way to get a better salary is to be promoted to a manager position. In order to advance in your career you need to let go of the everyday writing of code and become a supervisor or manager. This is the path for developers after they become senior developers. As you get older and your family grows, the only way to hit your salary requirements is to advance your career, become a manager, and get that manager salary. That path is the common one in most companies; the most intelligent companies out there have learned that promoting a good developer means getting a crappy manager and losing a good resource.

    Now scratch everything I said, because as I previously stated, I don't see myself going to the office every day and just managing people until it is time to go home. I like to spend hours working on some code to accomplish a task, and learning new platforms, languages, or patterns for existing languages. Being interrupted every 15 minutes by emails or people stopping by my office to resolve their problems is not something I could enjoy.

    Then, all of a sudden, riding my motorcycle to work one cold morning over the Redlands Canyon and listening to the .NET Rocks podcast, I heard Michael "Doc" Norton explaining how to take control of your development career without necessarily going down the manager's track. I know, I should not have headphones under my helmet when riding a motorcycle in California. His conversation with Carl Franklin and Richard Campbell simply confirmed everything I have ever done, with more detail, and reassured me that there are other paths. His method is simple, and most of us already do many of those steps. Mr. Michael "Doc" Norton believes it pays off in the long run, and that in the end companies prefer to pay higher salaries to those developers. I would actually think that many companies do not see developers that way, though this is less true for bigger companies. However, I do believe the value of those developers increases, and most of the time changing companies can increase their salary more than staying at the same one.

    In short, without even trying to step into the shadow of Mr. Norton, and without following the steps in order: you should love to learn new technologies, and then teach them to other geeks. I personally have learned many technologies and I haven't stopped doing that; I am a professor at UCR where I teach ASP.NET and Silverlight. Mr. Norton continues that, after that, you want to be involved in the development community: user groups, online forums, open source projects.
    I personally speak to user groups, and I'm very active in forums asking and answering questions; for that I was awarded the Microsoft MVP for ASP.NET. After you accomplish all of that, you should also expose yourself for what you know and what you do not know; learning a new language will make you humble again, as well as extremely happy. There is no better feeling than learning a new language or pattern in your daily job. If you love your job and what you do every day, I really recommend Michael's presentation, which he kindly shared at the link below. His confirmation is refreshing, knowing that my future is not behind a desk where the computer screen is on my right-hand side instead of in front of me, where I don't have to spend the days filling out performance forms for people, and where the new platforms that I haven't been using yet are just at my fingertips. Presentation here: http://www.slideshare.net/LeanDog/take-control-of-your-development-career-michael-doc-norton?from=share_email_logout3

    The slide outline, in brief: Take Control of Your Development Career, by Michael "Doc" Norton (@DocOnDev, http://docondev.blogspot.com/), a self-described recovering post-technical who loves to learn, to teach, to work in teams, and above all to write code, and who asks whether you love your job, your employer, your boss, and what you really love. Take Control: Get Noticed (know your business, understand management, do your existing job, make yourself expendable); Get Together (join, help run, or start a user group); Get Your Mojo (kata, koans, breakable toys, open source); Get Naked (run with group A, do something different, own your mistakes, admit you don't know); Get Schooled (choose a mentor, attend conferences, teach a new subject); Get Started (read these, again).

    In short summary, I recommend that any developer check out his blog and, more importantly, his presentation. I haven't been lucky enough to watch him live; I'm looking forward to the day I have the opportunity. He is giving us hope for the future of developers. When I see some of my geek friends move to positions that within a few short years they begin to regret, I get more unsure of my future doing what I love. I would say the answer now is to look at the spectrum of companies that understand and appreciate developers. There are a few out there; hopefully with time the code sweatshops will start disappearing and being a developer will feed a family of 4. Cheers, Al

    Read the article

  • Queueing Effect.Parallels in Scriptaculous doesn't work

    - by Matthew Robertson
    Each block of animations, grouped in an Effect.Parallel, runs simultaneously. That works fine. Then, I want each of the Effect.Parallels to trigger sequentially, with a delay. The second block doesn't wait its turn. It fires when the function is run. Why?!

        ///// FIRST BLOCK /////
        new Effect.Parallel([
            new Effect.Morph...
        ], { queue: 'front' });

        ///// SECOND BLOCK /////
        new Effect.Parallel([
            Element.toggleClassName($$('#add_comment_button .glyph').first(), 'yay')
        ], { sync: true, queue: 'end', delay: 1 });

        ///// THIRD BLOCK /////
        new Effect.Parallel([
            new Effect.SlideUp...
        ], { queue: 'end', delay: 4 });

    Read the article

  • [GoogleMaps] Get GLatLng from GPoint

    - by Jo Asakura
    Hello everybody, I have a Google map with my own map type:

        var currentProjection = new GMercatorProjection(maxLevels + 1);
        var mapBounds = new GLatLngBounds(new GLatLng(-9, -15), new GLatLng(9, 15));
        var custommap = new GMapType(tilelayers, currentProjection, "Some project");
        map.addMapType(custommap);
        map.setCenter(mapBounds.getCenter(), minLevels, custommap);

    When the user clicks on the map, a context menu appears (singlerightclick event). From the context menu the user can add markers, and I need a GLatLng value to add a marker to the map, but the singlerightclick event contains only a GPoint value. I tried the following statement:

        map.getCurrentMapType().getProjection().fromPixelToLatLng(pointValueFromEvent, map.getZoom());

    but it wasn't helpful (the GLatLng value is outside of my map). I think it's because I use my own map type. How can I get a GLatLng value to add the marker? Best regards, Alex.

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is.

    PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.

    New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.

    New features and enhancements of SharePoint 2010 PerformancePoint Services:
    • With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data.
    • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards.
    • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
    • Better Time Intelligence filtering capabilities that you can use to create dynamic time filters that are always up to date. Other improved filters make it easier for dashboard users to quickly focus on the information that is most relevant.
    • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
    • Easier authoring and publishing of dashboard items by using Dashboard Designer.
    • SQL Server Analysis Services 2008 support.
    • Increased support for accessibility compliance in individual reports and scorecards.
    • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
    • Analytic reports help you better understand the underlying business forces behind the results. They have been enhanced to support value filtering, new chart types, and server-based conditional formatting.

    To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007, and the associated free SharePoint web parts and templates.

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code: Get the file name from the elements file of the Feature Check out the file from the master page gallery Upload the file to the master page gallery Check in the file to the master page gallery Here’s the code in my Feature Receiver: 1: public override void FeatureActivated(SPFeatureReceiverProperties properties) 2: { 3: try 4: { 5:   6: SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture); 7:   8: using (SPWeb curweb = GetCurWeb(properties)) 9: { 10: foreach (SPElementDefinition ele in col) 11: { 12: if (string.Compare(ele.ElementType, "Module", true) == 0) 13: { 14: // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE"> 15: // <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE" 16: // Path="MasterPages/myMaster.master" /> 17: // </Module> 18: string Url = ele.XmlDefinition.Attributes["Url"].Value; 19: foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes) 20: { 21: string Url2 = file.Attributes["Url"].Value; 22: string Path = file.Attributes["Path"].Value; 23: string fileType = file.Attributes["Type"].Value; 24:   25: if (string.Compare(fileType, "GhostableInLibrary", true) == 0) 26: { 27: //Check out file in document library 28: SPFile existingFile = curweb.GetFile(Url + "/" + Url2); 29:   30: if (existingFile != null) 31: { 32: if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None) 33: { 34: throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature."); 35: } 36: else 37: { 38: existingFile.CheckOut(); 39: existingFile.Update(); 40: } 41: } 42:   43: //Upload file to document library 44: string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path); 45: string fileName = System.IO.Path.GetFileName(filePath); 46: char slash = Convert.ToChar("/"); 47: string[] folders = existingFile.ParentFolder.Url.Split(slash); 48:   49: if (folders.Length > 2) 50: { 51: Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.", 52: Logger.LogEntryType.Information); //custom logging component 53: } 54:   55: SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]]; 56:   57: FileStream fs = File.OpenRead(filePath); 58:   59: SPFile newFile = myLibrary.Files.Add(fileName, fs, true); 60:   61: myLibrary.Update(); 62: newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn); 63: newFile.Update(); 64: } 65: } 66: } 67: } 68: } 69: } 70: catch (Exception ex) 71: { 72: string msg = "Error occurred during feature activation"; 73: Logger.logException(ex, msg, ""); 74: } 75:   76: } 77:   78: /// <summary> 79: /// Using a Feature's properties, get a reference to the Current Web 80: /// </summary> 81: /// <param name="properties"></param> 82: public SPWeb GetCurWeb(SPFeatureReceiverProperties properties) 83: { 84: SPWeb curweb; 85:   86: //Check if the parent of the web is a site or a web 87: if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb") 88: { 89:   90: //Get web from parent 91: curweb = (SPWeb)properties.Feature.Parent; 92: 93: } 94: else 95: { 96: //Get web from Site 97: using (SPSite cursite = (SPSite)properties.Feature.Parent) 98: { 99: curweb = (SPWeb)cursite.OpenWeb(); 100: } 101: } 102:   103: return curweb; 104: } This did the trick.  
    It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!). I did run into what I would classify as a strange issue with one of my subsites, but that's the topic for another blog post.

    Read the article

  • oracle clob insertion problem in spring

    - by haluk
    Hi, I want to insert a CLOB value into my Oracle database, and here is what I could do. I got this exception during the insert operation: "ORA-01461: can bind a LONG value only for insert into a LONG column". Would someone be able to tell me what I should do? Thanks.

        List<Object> listObjects = dao.selectAll("TABLE NAME", new XRowMapper());
        String queryX = "INSERT INTO X (A,B,C,D,E,F) VALUES (?,?,?,?,?,XMLTYPE(?))";
        OracleLobHandler lobHandler = new OracleLobHandler();

        for (Object myObject : listObjects) {
            dao.create(queryX,
                new Object[] {
                    ((X) myObject).getA(),
                    ((X) myObject).getB(),
                    new SqlLobValue(((X) myObject).getC(), lobHandler),
                    ((X) myObject).getD(),
                    ((X) myObject).getE(),
                    ((X) myObject).getF()
                },
                new int[] { Types.VARCHAR, Types.VARCHAR, Types.CLOB,
                            Types.VARCHAR, Types.VARCHAR, Types.VARCHAR });
        }
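    The original post does not show the dao.create implementation, so purely as a point of reference, here is a minimal sketch of how a CLOB column is commonly bound through Spring's JdbcTemplate with an explicit LobHandler. The table and column names are invented for illustration, and this is not a diagnosis of the ORA-01461 error itself, just the reference shape for pairing SqlLobValue with Types.CLOB.

        import java.sql.Types;
        import org.springframework.jdbc.core.JdbcTemplate;
        import org.springframework.jdbc.core.support.SqlLobValue;
        import org.springframework.jdbc.support.lob.DefaultLobHandler;
        import org.springframework.jdbc.support.lob.LobHandler;

        public class ClobInsertSketch {
            private final JdbcTemplate jdbcTemplate;

            public ClobInsertSketch(JdbcTemplate jdbcTemplate) {
                this.jdbcTemplate = jdbcTemplate;
            }

            public void insert(String id, String largeText) {
                // DefaultLobHandler works with current Oracle JDBC drivers;
                // OracleLobHandler is the older, Oracle-specific alternative.
                LobHandler lobHandler = new DefaultLobHandler();

                // DEMO_DOC and its columns are hypothetical.
                jdbcTemplate.update(
                        "INSERT INTO DEMO_DOC (ID, BODY) VALUES (?, ?)",
                        new Object[] { id, new SqlLobValue(largeText, lobHandler) },
                        new int[] { Types.VARCHAR, Types.CLOB });
            }
        }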

    Read the article

  • ASP.Net MVC Keeping action parameters between postbacks

    - by Matt
    Say I have a page that displays search results. I search for stackoverflow and it returns 5000 results, 10 per page. Now I find myself doing this when building links on that page:

        <%=Html.ActionLink("Page 1", "Search", new { query=ViewData["query"], page etc..%>
        <%=Html.ActionLink("Page 2", "Search", new { query=ViewData["query"], page etc..%>
        <%=Html.ActionLink("Page 3", "Search", new { query=ViewData["query"], page etc..%>
        <%=Html.ActionLink("Next", "Search", new { query=ViewData["query"], page etc..%>

    I don't like this; I have to build my links with careful consideration of what was posted previously, etc. What I'd like to do is:

        <%=Html.BuildActionLinkUsingCurrentActionPostData("Next", "Search", new { Page = 1 });

    where the anonymous dictionary overrides anything currently set by the previous action. Essentially I care about what the previous action parameters were, because I want to reuse them. It sounds simple, but start adding sorting and loads of advanced search options and it starts getting messy. I'm probably missing something obvious.

    Read the article

  • Getting problem in accessing web cam.

    - by Chetan
    Hi... I have written code in Java to access the web cam and save an image. I am getting the following exceptions:

        Exception in thread "main" java.lang.NullPointerException
            at SwingCapture.<init>(SwingCapture.java:40)
            at SwingCapture.main(SwingCapture.java:66)

    How do I remove these exceptions? Here is the code:

        import javax.swing.*;
        import javax.swing.event.*;
        import java.io.*;
        import javax.media.*;
        import javax.media.format.*;
        import javax.media.util.*;
        import javax.media.control.*;
        import javax.media.protocol.*;
        import java.util.*;
        import java.awt.*;
        import java.awt.image.*;
        import java.awt.event.*;
        import com.sun.image.codec.jpeg.*;

        public class SwingCapture extends Panel implements ActionListener {
            public static Player player = null;
            public CaptureDeviceInfo di = null;
            public MediaLocator ml = null;
            public JButton capture = null;
            public Buffer buf = null;
            public Image img = null;
            public VideoFormat vf = null;
            public BufferToImage btoi = null;
            public ImagePanel imgpanel = null;

            public SwingCapture() {
                setLayout(new BorderLayout());
                setSize(320, 550);
                imgpanel = new ImagePanel();
                capture = new JButton("Capture");
                capture.addActionListener(this);
                String str1 = "vfw:iNTEX IT-308 WC:0";
                String str2 = "vfw:Microsoft WDM Image Capture (Win32):0";
                di = CaptureDeviceManager.getDevice(str2);
                ml = di.getLocator();
                try {
                    player = Manager.createRealizedPlayer(ml);
                    player.start();
                    Component comp;
                    if ((comp = player.getVisualComponent()) != null) {
                        add(comp, BorderLayout.NORTH);
                    }
                    add(capture, BorderLayout.CENTER);
                    add(imgpanel, BorderLayout.SOUTH);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static void main(String[] args) {
                Frame f = new Frame("SwingCapture");
                SwingCapture cf = new SwingCapture();
                f.addWindowListener(new WindowAdapter() {
                    public void windowClosing(WindowEvent e) {
                        playerclose();
                        System.exit(0);
                    }
                });
                f.add("Center", cf);
                f.pack();
                f.setSize(new Dimension(320, 550));
                f.setVisible(true);
            }

            public static void playerclose() {
                player.close();
                player.deallocate();
            }

            public void actionPerformed(ActionEvent e) {
                JComponent c = (JComponent) e.getSource();
                if (c == capture) {
                    // Grab a frame
                    FrameGrabbingControl fgc = (FrameGrabbingControl) player.getControl("javax.media.control.FrameGrabbingControl");
                    buf = fgc.grabFrame();
                    // Convert it to an image
                    btoi = new BufferToImage((VideoFormat) buf.getFormat());
                    img = btoi.createImage(buf);
                    // show the image
                    imgpanel.setImage(img);
                    // save image
                    saveJPG(img, "\test.jpg");
                }
            }

            class ImagePanel extends Panel {
                public Image myimg = null;

                public ImagePanel() {
                    setLayout(null);
                    setSize(320, 240);
                }

                public void setImage(Image img) {
                    this.myimg = img;
                    repaint();
                }

                public void paint(Graphics g) {
                    if (myimg != null) {
                        g.drawImage(myimg, 0, 0, this);
                    }
                }
            }

            public static void saveJPG(Image img, String s) {
                BufferedImage bi = new BufferedImage(img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
                Graphics2D g2 = bi.createGraphics();
                g2.drawImage(img, null, null);
                FileOutputStream out = null;
                try {
                    out = new FileOutputStream(s);
                } catch (java.io.FileNotFoundException io) {
                    System.out.println("File Not Found");
                }
                JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
                JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(bi);
                param.setQuality(0.5f, false);
                encoder.setJPEGEncodeParam(param);
                try {
                    encoder.encode(bi);
                    out.close();
                } catch (java.io.IOException io) {
                    System.out.println("IOException");
                }
            }
        }
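    A side note that is not in the original post, only an assumption based on the stack trace: CaptureDeviceManager.getDevice() returns null when no registered capture device matches the given name, and calling getLocator() on that null reference in the constructor would produce exactly this NullPointerException. A minimal JMF sketch that lists the registered devices and guards against a missing one (the device name below is just an example):

        import java.util.Vector;
        import javax.media.CaptureDeviceInfo;
        import javax.media.CaptureDeviceManager;

        public class ListCaptureDevices {
            public static void main(String[] args) {
                // Passing null returns every capture device registered with JMF.
                Vector devices = CaptureDeviceManager.getDeviceList(null);
                if (devices.isEmpty()) {
                    System.out.println("No capture devices are registered with JMF.");
                    return;
                }
                for (Object o : devices) {
                    CaptureDeviceInfo info = (CaptureDeviceInfo) o;
                    System.out.println("Found device: " + info.getName());
                }

                // Look up a device by its exact registered name and null-check it before use.
                CaptureDeviceInfo di = CaptureDeviceManager.getDevice("vfw:Microsoft WDM Image Capture (Win32):0");
                if (di == null) {
                    System.out.println("Requested device is not registered; register it (e.g. with JMFRegistry) or adjust the name.");
                } else {
                    System.out.println("Locator: " + di.getLocator());
                }
            }
        }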

    Read the article

  • how to group by over anonymous type with vb.net linq to object

    - by smoothdeveloper
    I'm trying to write a LINQ to Objects query in VB.NET. Here is the C# version of what I'm trying to achieve (I'm running this in LINQPad):

        void Main()
        {
            var items = GetArray(
                new {a="a", b="a", c=1},
                new {a="a", b="a", c=2},
                new {a="a", b="b", c=1}
            );

            (
                from i in items
                group i by new {i.a, i.b} into g
                let p = new { k = g, v = g.Sum((i)=>i.c) }
                where p.v > 1
                select p
            ).Dump();
        }

        // because vb.net doesn't support anonymous type array initializers, this eases the translation
        T[] GetArray<T>(params T[] values)
        {
            return values;
        }

    I'm having a hard time with the group-by syntax, which is not the same (VB requires 'identifier = expression' in some places), as well as with the summing functor ('expression required'). Thanks so much for your help!

    Read the article

  • Thank You for a Great Welcome for Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    Yesterday morning we had two launch webcasts for Oracle GoldenGate 11g Release 2. I had the pleasure of presenting, as well as moderating the Q&A panels, in both of these webcasts. Both events had hundreds of live attendees, sending us over 150 questions. Even though we left 30 minutes for Q&A, it was not nearly enough time to address all the insightful questions our audience sent. Our product management team and I really appreciate the interaction we had yesterday, and we are starting to respond to the outstanding questions today.

    Oracle GoldenGate's new release launch also received a great welcome from the media. You can find links to various articles on the new release below:
    • ITBusinessEdge: Oracle Embraces Cross-Platform Data Integration
    • Information Week: Oracle Real-Time Advance Taps Compressed Data
    • Integration Developer News: Oracle GoldenGate Adds Deeper Oracle Integration, Extends Real-Time Performance
    • CIO: Oracle GoldenGate Buddies Up with Sibling Software
    • DBTA: Real-Time Data Integration: Oracle GoldenGate 11g Release 2 Now Available
    • CBR: Oracle unveils GoldenGate 11g Release 2 real-time data integration application

    In this blog, I want to address some of the frequently asked questions that came up during the webcasts. You can find the top questions and their answers, along with related resources, below. We will continue to address frequently asked questions via future blogs.

    Q: Will the new Integrated Capture for Oracle Database replace the Classic Capture? If not, which one do I use when?
    A: No, Classic Capture will be around for a long time. Core platform-specific features, bug fixes, and patches will be available for both Capture processes. Oracle Database-specific features will only be available in the Integrated Capture. The Integrated Capture for Oracle Database is an option for users that need to capture data from compressed tables or need support for XML data types or XA on RAC. Users who don't leverage these features should continue to use our Classic Capture. For more information on Oracle GoldenGate 11g Release 2, I recommend checking out the white paper Oracle GoldenGate 11gR2 New Features, as well as the other technical white papers we have on OTN. For those of you coming to OpenWorld, please attend the related session Extracting Data in Oracle GoldenGate Integrated Capture Mode, Monday Oct 1st 1:45pm, Moscone South 102, to learn more about this new feature.

    Q: What is new in Conflict Detection and Resolution? And how does it work?
    A: There are now pre-built functions to identify the conditions under which an error occurs and how to handle the record when the condition occurs. Error conditions handled include inserts into a target table where the row already exists; updates or deletes to target table rows that exist, but where the original source data (before columns) does not match the existing data in the target row; and updates or deletes where the row does not exist in the target database table. For each of these conditions a method to handle the error is specified. Please check out our recent blog on this topic and the Oracle GoldenGate 11gR2 New Features white paper. Also, for those attending OpenWorld, please attend the session Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active, Wednesday Oct 3rd 3:30pm, Moscone 3000.

    Q: Do Oracle GoldenGate Veridata and the Management Pack require additional licenses, or are they incorporated with the GoldenGate license?
    A: Oracle GoldenGate Veridata and Oracle Management Pack for Oracle GoldenGate are additional products and require separate licenses. Please check out Oracle's price list here.

    Q: Does the GoldenGate Oracle Enterprise Manager Plug-in require an additional license?
    A: The Oracle Enterprise Manager Plug-in is included in the Oracle Management Pack for Oracle GoldenGate license, which is separate from the Oracle GoldenGate license. There is no separate license for the Enterprise Manager Plug-in by itself. Oracle GoldenGate Monitor, Oracle GoldenGate Director, and the Enterprise Manager Plug-in are included in the Management Pack for Oracle GoldenGate license. Please check out the Management Pack for Oracle GoldenGate data sheet for more info on this product bundle.

    Q: Is Oracle GoldenGate replacing the Oracle Streams product?
    A: Oracle GoldenGate is the strategic data replication product. Therefore, Oracle Streams will continue to be supported, but will not be actively enhanced. Rather, the best elements of Oracle Streams will be added to Oracle GoldenGate. Conflict management is one of them, and with the latest release Oracle GoldenGate has a more advanced conflict management offering. Current customers depending on Oracle Streams will continue to be fully supported.

    Q: How is Oracle GoldenGate different from Oracle Data Integrator?
    A: Oracle Data Integrator is designed for fast bulk data movement and transformation between heterogeneous systems, while GoldenGate is designed for real-time movement of transactions between heterogeneous systems. The two products are completely complementary: GoldenGate provides low-impact real-time change data capture and delivery to a staging area on the target, and Oracle Data Integrator transforms this data and loads the DW tables. In fact, Oracle Data Integrator integrates with GoldenGate to use GoldenGate's Capture process as one option for its CDC mechanism. We have several customers that deployed GoldenGate and ODI together to feed real-time data to their data warehousing solutions. Please also check out the Oracle Data Integrator Changed Data Capture with Oracle GoldenGate data sheet (PDF).

    Thank you again very much for welcoming Oracle GoldenGate 11g Release 2, and stay in touch with us for more exciting news, updates, and events.

    Read the article

  • How to create generic class which takes 3 types.

    - by scope-creep
    I'm trying to make a generic class that takes 3 types: either a simple string, an IList<string>, or an IList<OntologyStore>.

        public class OntologyStore
        {
        }

        public sealed class jim<T> where T : new()
        {
            IList<string> X = null;
            IList<OntologyStore> X1 = null;

            public jim()
            {
                if (typeof(T) == typeof(String))
                {
                    X = new List<string>();
                }
                if (typeof(T) == typeof(OntologyStore))
                {
                    X1 = new List<OntologyStore>();
                }
            }
        }

    I can easily create jim<OntologyStore> x1 = new jim<OntologyStore>(), as you would expect, but when I put in jim<string> x2 = new jim<string>() the compiler reports that string must be a non-abstract type with a public parameterless constructor, which you would expect given the new() constraint. Is it possible to create a generic class which can be instantiated to hold a string, an IList<string>, or an IList<OntologyStore>?

    Read the article

  • Display subform after validate main form

    - by bahamut100
    Hi, I want to display a new form after the validation of a form. I use Zend_Form_SubForm to do it (I tried Zend_Form too), but this new form is not displayed. Why? This is the code in question:

        if ($this->_request->isPost()) {
            $formData = $this->_request->getPost();
            if ($form->isValid($formData)) {
                $subForm = new Zend_Form_SubForm();
                $subForm->setAction('')
                        ->setMethod('post');

                $nom = new Zend_Form_Element_Text('nom');

                $submit = new Zend_Form_Element_Submit('submit');
                $submit->setLabel('OK');

                $subForm->addElements(array($nom, $submit));
            } else {
                $form->populate($formData);
            }
        }

    Read the article

  • subsonic in visual studio design host

    - by csizo
    Hi, I'm currently facing a problem with SubSonic configuration. What I want to achieve is using SubSonic data access in a System.Web.UI.Design.ControlDesigner class. This class is hosted in the Visual Studio environment and enables design-time operations on the attached System.Web.UI.WebControls.Control. The only problem is that SubSonic always seems to look for a SubSonicSection in the application configuration, regardless of the connection string passed to it. The relevant code snippet:

        using (SharedDbConnectionScope dbScope = new SharedDbConnectionScope(new SqlDataProvider(), ConnectionString))
        {
            Table1 _table1 = new Select().From<..().Where(...).IsEqualTo(...).ExecuteSingle<...>();

    throws an exception on the ExecuteSingle() method (configuration section was not found), while

        using (SharedDbConnectionScope dbScope = new SharedDbConnectionScope(ConnectionString))
        {

    throws an exception on new SharedDbConnectionScope() (configuration section was not found). So the question is: is there any way to pass the settings at runtime to bypass the configuration section lookup, as I don't want to add any SubSonic-specific configuration to devenv.configuration? Thanks

    Read the article

  • Why doesn't my implementation of El Gamal work for long text strings?

    - by angstrom91
    I'm playing with the El Gamal cryptosystem, and my goal is to be able to encipher and decipher long sequences of text. I have come up with a method that works for short sequences, but does not work for long sequences, and I cannot figure out why. El Gamal requires the plaintext to be an integer. I have turned my string into a byte[] using the .getBytes() method for Strings, and then created a BigInteger out of the byte[]. After encryption/decryption, I turn the BigInteger into a byte[] using the .toByteArray() method for BigIntegers, and then create a new String object from the byte[]. This works perfectly when i call ElGamalEncipher with strings up to 129 characters. With 130 or more characters, the output produced is garbled. Can someone suggest how to solve this issue? Is this an issue with my method of turning the string into a BigInteger? If so, is there a better way to turn my string of text into a BigInteger and back? Below is my encipher/decipher code with a program to demonstrate the problem. import java.math.BigInteger; public class Main { static BigInteger P = new BigInteger("15893293927989454301918026303382412" + "2586402937727056707057089173871237566896685250125642378268385842" + "6917261652781627945428519810052550093673226849059197769795219973" + "9423619267147615314847625134014485225178547696778149706043781174" + "2873134844164791938367765407368476144402513720666965545242487520" + "288928241768306844169"); static BigInteger G = new BigInteger("33234037774370419907086775226926852" + "1714093595439329931523707339920987838600777935381196897157489391" + "8360683761941170467795379762509619438720072694104701372808513985" + "2267495266642743136795903226571831274837537691982486936010899433" + "1742996138863988537349011363534657200181054004755211807985189183" + "22832092343085067869"); static BigInteger R = new BigInteger("72294619754760174015019300613282868" + "7219874058383991405961870844510501809885568825032608592198728334" + "7842806755320938980653857292210955880919036195738252708294945320" + "3969657021169134916999794791553544054426668823852291733234236693" + "4178738081619274342922698767296233937873073756955509269717272907" + "8566607940937442517"); static BigInteger A = new BigInteger("32189274574111378750865973746687106" + "3695160924347574569923113893643975328118502246784387874381928804" + "6865920942258286938666201264395694101012858796521485171319748255" + "4630425677084511454641229993833255506759834486100188932905136959" + "7287419551379203001848457730376230681693887924162381650252270090" + "28296990388507680954"); public static void main(String[] args) { FewChars(); System.out.println(); ManyChars(); } public static void FewChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 129 characters which works just fine . This is a string " + "of 129 characters which works just fine . This is a s", P, G, R); System.out.println("This is a string of 129 characters which works " + "just fine . This is a string of 129 characters which works " + "just fine . 
This is a s"); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } public static void ManyChars() { //ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) BigInteger[] cipherText = ElGamal.ElGamalEncipher("This is a string " + "of 130 characters which doesn’t work! This is a string of " + "130 characters which doesn’t work! This is a string of ", P, G, R); System.out.println("This is a string of 130 characters which doesn’t " + "work! This is a string of 130 characters which doesn’t work!" + " This is a string of "); //ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) String output = ElGamal.ElGamalDecipher(cipherText[0], cipherText[1], A, P); System.out.println("The decrypted text is: " + output); } } import java.math.BigInteger; import java.security.SecureRandom; public class ElGamal { public static BigInteger[] ElGamalEncipher(String plaintext, BigInteger p, BigInteger g, BigInteger r) { // returns a BigInteger[] cipherText // cipherText[0] is c // cipherText[1] is d SecureRandom sr = new SecureRandom(); BigInteger[] cipherText = new BigInteger[2]; BigInteger pText = new BigInteger(plaintext.getBytes()); // 1: select a random integer k such that 1 <= k <= p-2 BigInteger k = new BigInteger(p.bitLength() - 2, sr); // 2: Compute c = g^k(mod p) BigInteger c = g.modPow(k, p); // 3: Compute d= P*r^k = P(g^a)^k(mod p) BigInteger d = pText.multiply(r.modPow(k, p)).mod(p); // C =(c,d) is the ciphertext cipherText[0] = c; cipherText[1] = d; return cipherText; } public static String ElGamalDecipher(BigInteger c, BigInteger d, BigInteger a, BigInteger p) { //returns the plaintext enciphered as (c,d) // 1: use the private key a to compute the least non-negative residue // of an inverse of (c^a)' (mod p) BigInteger z = c.modPow(a, p).modInverse(p); BigInteger P = z.multiply(d).mod(p); byte[] plainTextArray = P.toByteArray(); return new String(plainTextArray); } }
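    The reason 130 characters breaks is not stated in the post, but the arithmetic suggests it (this is an observation added here, not part of the original): ElGamal can only recover plaintext integers smaller than the modulus p, and p here is roughly 1024 bits (about 128 bytes), so once the byte string grows past that, decryption returns the value reduced mod p and the text comes back garbled. One common workaround is to split the message into blocks that each stay below p and encipher every block separately. A minimal sketch of that splitting step, with the block size derived from p's bit length (an assumption, as is the helper class itself):

        import java.math.BigInteger;
        import java.nio.charset.StandardCharsets;
        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical helper: splits a message into chunks whose BigInteger value
        // is guaranteed to be smaller than p, so each chunk can be run through the
        // existing ElGamalEncipher/ElGamalDecipher methods on its own.
        public class ElGamalBlocks {

            // A block a few bytes shorter than p keeps the encoded value below p.
            static int blockSizeBytes(BigInteger p) {
                return (p.bitLength() / 8) - 1;
            }

            static List<byte[]> split(String plaintext, BigInteger p) {
                byte[] all = plaintext.getBytes(StandardCharsets.UTF_8);
                int block = blockSizeBytes(p);
                List<byte[]> chunks = new ArrayList<byte[]>();
                for (int off = 0; off < all.length; off += block) {
                    int len = Math.min(block, all.length - off);
                    byte[] chunk = new byte[len];
                    System.arraycopy(all, off, chunk, 0, len);
                    chunks.add(chunk);
                }
                return chunks;
            }

            // Each chunk would then be encoded with new BigInteger(1, chunk) (the sign
            // bit of new BigInteger(byte[]) is another subtle trap), enciphered, and on
            // decryption the recovered byte arrays concatenated back together. A real
            // implementation also has to preserve leading zero bytes, for example by
            // tracking each block's original length.
        }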

    Read the article

  • How to use map/reduce to handle more than 10000 unique keys for grouping in MongoDB?

    - by Magnus Johansson
    I am using MongoDB v1.4 and the mongodb-csharp driver, and I try to group on a data store that has more than 10000 keys, so I get this error:

        assertion: group() can't handle more than 10000 unique keys

    using C# code like this:

        Document query = new Document().Append("group", new Document()
            .Append("key", new Document().Append("myfieldname", true))
            .Append("$reduce", new CodeWScope("function(obj,prev) { prev.count++; }"))
            .Append("initial", new Document().Append("count", 0))
            .Append("ns", "myitems"));

    I read that I should use map/reduce, but I can't figure out how. Can somebody please shed some light on how to use map/reduce? Or is there any other way to get around this limitation? Thanks.
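    The 10,000-unique-key limit is specific to the group() command; mapReduce runs as a database command with JavaScript map and reduce functions and is not subject to it. The question uses the mongodb-csharp driver, but the command has the same shape from any driver; purely for illustration, here is a sketch using the MongoDB Java driver, with the collection and field names taken from the snippet above and everything else (database name, output collection, host) assumed:

        import com.mongodb.BasicDBObject;
        import com.mongodb.DB;
        import com.mongodb.DBObject;
        import com.mongodb.Mongo;

        public class MapReduceCountSketch {
            public static void main(String[] args) throws Exception {
                DB db = new Mongo("localhost", 27017).getDB("test");

                // map emits the grouping key once per document; reduce sums the counts
                String map = "function() { emit(this.myfieldname, 1); }";
                String reduce = "function(key, values) { var total = 0; "
                        + "for (var i = 0; i < values.length; i++) { total += values[i]; } "
                        + "return total; }";

                DBObject command = new BasicDBObject("mapreduce", "myitems")
                        .append("map", map)
                        .append("reduce", reduce)
                        .append("out", "myitems_counts"); // output collection name (assumed)

                // Runs server-side; the per-key counts land in the output collection.
                System.out.println(db.command(command));
            }
        }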

    Read the article

  • Create PDF in memory instead of physical file

    - by acadia
    How does one create a PDF in a MemoryStream instead of a physical file using iTextSharp? The code below creates an actual PDF file. Instead, how can I create a byte[] and store the PDF in it, so that I can return it from a function?

        using iTextSharp.text;
        using iTextSharp.text.pdf;

        Document doc = new Document(iTextSharp.text.PageSize.LETTER, 10, 10, 42, 35);
        PdfWriter wri = PdfWriter.GetInstance(doc, new FileStream("c:\\Test11.pdf", FileMode.Create));
        doc.Open(); // Open Document to write

        Paragraph paragraph = new Paragraph("This is my first line using Paragraph.");
        Phrase pharse = new Phrase("This is my second line using Pharse.");
        Chunk chunk = new Chunk(" This is my third line using Chunk.");

        doc.Add(paragraph);
        doc.Add(pharse);
        doc.Add(chunk);

        doc.Close(); // Close document

    Read the article

  • C# how do i make picturebox to form

    - by cheesebunz
    I'm unable to get my picture boxes to show on the form. Am I doing it wrong? This is my code:

        static Bitmap[] pictures = new Bitmap[9];
        PictureBox[] picBox = new PictureBox[9];

    and in the constructor:

          pictures[1] = new Bitmap(@"1.1Bright.jpg");
        * picBox[1].Location = new System.Drawing.Point(25, 7);
          picBox[1].SizeMode = PictureBoxSizeMode.StretchImage;
          picBox[1].ClientSize = new Size(53, 40);
          picBox[1].Image = pictures[1];

    I keep getting a NullReferenceException on the line marked with *.

    Read the article

  • Databind' is not a member of 'CrystalDecisions.Windows.Forms.CrystalReportViewer'

    - by Selom
    Hi, I've been trying to cope with this problem, but I still need your help. I'm getting the following error message: 'Databind' is not a member of 'CrystalDecisions.Windows.Forms.CrystalReportViewer' in my code:

        Dim rpt As New CrystalReport1()
        Dim da As New SQLiteDataAdapter
        Dim ds As New presbydbDataSet
        'Dim cmd As New SQLiteCommand("SELECT personal_details.fn, training.training_level FROM personal_details INNER JOIN training ON Staff_ID WHERE personal_details.staff_ID='" + detailsFrm.Label13.Text + "'", conn)
        Dim cmd As New SQLiteCommand("SELECT * FROM personal_details WHERE personal_details.staff_ID='" + detailsFrm.Label13.Text + "'; SELECT * FROM training WHERE training.staff_ID='" + detailsFrm.Label13.Text + "'", conn)
        cmd.ExecuteNonQuery()
        da.SelectCommand = cmd
        da.Fill(ds)
        rpt.SetDataSource(ds)
        rpt.Subreports.Item("personal_detailsRpt").SetDataSource(ds.Tables("personal_details"))
        rpt.Subreports.Item("trainingRpt").SetDataSource(ds.Tables("training"))
        CrystalReportViewer1.ReportSource = rpt
        CrystalReportViewer1.DataBind()

    I'm using VB.NET and these are the Imports I'm using:

        Imports System.Data.SQLite
        Imports System.Configuration
        Imports CrystalDecisions.CrystalReports.Engine
        Imports CrystalDecisions.ReportAppServer

    Please, how can I get rid of this error? Thanks for answering.

    Read the article

  • Java code optimization on matrix windowing computes in more time

    - by rano
    I have a matrix which represents an image and I need to cycle over each pixel and for each one of those I have to compute the sum of all its neighbors, ie the pixels that belong to a window of radius rad centered on the pixel. I came up with three alternatives: The simplest way, the one that recomputes the window for each pixel The more optimized way that uses a queue to store the sums of the window columns and cycling through the columns of the matrix updates this queue by adding a new element and removing the oldes The even more optimized way that does not need to recompute the queue for each row but incrementally adjusts a previously saved one I implemented them in c++ using a queue for the second method and a combination of deques for the third (I need to iterate through their elements without destructing them) and scored their times to see if there was an actual improvement. it appears that the third method is indeed faster. Then I tried to port the code to Java (and I must admit that I'm not very comfortable with it). I used ArrayDeque for the second method and LinkedLists for the third resulting in the third being inefficient in time. Here is the simplest method in C++ (I'm not posting the java version since it is almost identical): void normalWindowing(int mat[][MAX], int cols, int rows, int rad){ int i, j; int h = 0; for (i = 0; i < rows; ++i) { for (j = 0; j < cols; j++) { h = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { for (int rx =- rad; rx <= rad; rx++) { int x = j + rx; if (x >= 0 && x < cols) { h += mat[y][x]; } } } } } } } Here is the second method (the one optimized through columns) in C++: void opt1Windowing(int mat[][MAX], int cols, int rows, int rad){ int i, j, h, y, col; queue<int>* q = NULL; for (i = 0; i < rows; ++i) { if (q != NULL) delete(q); q = new queue<int>(); h = 0; for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][rx]; } } q->push(mem); h += mem; } } for (j = 1; j < cols; j++) { col = j + rad; if (j - rad > 0) { h -= q->front(); q->pop(); } if (j + rad < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][col]; } } q->push(mem); h += mem; } } } } And here is the Java version: public static void opt1Windowing(int [][] mat, int rad){ int i, j = 0, h, y, col; int cols = mat[0].length; int rows = mat.length; ArrayDeque<Integer> q = null; for (i = 0; i < rows; ++i) { q = new ArrayDeque<Integer>(); h = 0; for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][rx]; } } q.addLast(mem); h += mem; } } j = 0; for (j = 1; j < cols; j++) { col = j + rad; if (j - rad > 0) { h -= q.peekFirst(); q.pop(); } if (j + rad < cols) { int mem = 0; for (int ry =- rad; ry <= rad; ry++) { y = i + ry; if (y >= 0 && y < rows) { mem += mat[y][col]; } } q.addLast(mem); h += mem; } } } } I recognize this post will be a wall of text. 
Here is the third method in C++: void opt2Windowing(int mat[][MAX], int cols, int rows, int rad){ int i = 0; int j = 0; int h = 0; int hh = 0; deque< deque<int> *> * M = new deque< deque<int> *>(); for (int ry = 0; ry <= rad; ry++) { if (ry < rows) { deque<int> * q = new deque<int>(); M->push_back(q); for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int val = mat[ry][rx]; q->push_back(val); h += val; } } } } deque<int> * C = new deque<int>(M->front()->size()); deque<int> * Q = new deque<int>(M->front()->size()); deque<int> * R = new deque<int>(M->size()); deque< deque<int> *>::iterator mit; deque< deque<int> *>::iterator mstart = M->begin(); deque< deque<int> *>::iterator mend = M->end(); deque<int>::iterator rit; deque<int>::iterator rstart = R->begin(); deque<int>::iterator rend = R->end(); deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); for (mit = mstart, rit = rstart; mit != mend, rit != rend; ++mit, ++rit) { deque<int>::iterator pit; deque<int>::iterator pstart = (* mit)->begin(); deque<int>::iterator pend = (* mit)->end(); for(cit = cstart, pit = pstart; cit != cend && pit != pend; ++cit, ++pit) { (* cit) += (* pit); (* rit) += (* pit); } } for (i = 0; i < rows; ++i) { j = 0; if (i - rad > 0) { deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); deque<int>::iterator pit; deque<int>::iterator pstart = (M->front())->begin(); deque<int>::iterator pend = (M->front())->end(); for(cit = cstart, pit = pstart; cit != cend; ++cit, ++pit) { (* cit) -= (* pit); } deque<int> * k = M->front(); M->pop_front(); delete k; h -= R->front(); R->pop_front(); } int row = i + rad; if (row < rows && i > 0) { deque<int> * newQ = new deque<int>(); M->push_back(newQ); deque<int>::iterator cit; deque<int>::iterator cstart = C->begin(); deque<int>::iterator cend = C->end(); int rx; int tot = 0; for (rx = 0, cit = cstart; rx <= rad; rx++, ++cit) { if (rx < cols) { int val = mat[row][rx]; newQ->push_back(val); (* cit) += val; tot += val; } } R->push_back(tot); h += tot; } hh = h; copy(C->begin(), C->end(), Q->begin()); for (j = 1; j < cols; j++) { int col = j + rad; if (j - rad > 0) { hh -= Q->front(); Q->pop_front(); } if (j + rad < cols) { int val = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { val += mat[y][col]; } } hh += val; Q->push_back(val); } } } } And finally its Java version: public static void opt2Windowing(int [][] mat, int rad){ int cols = mat[0].length; int rows = mat.length; int i = 0; int j = 0; int h = 0; int hh = 0; LinkedList<LinkedList<Integer>> M = new LinkedList<LinkedList<Integer>>(); for (int ry = 0; ry <= rad; ry++) { if (ry < rows) { LinkedList<Integer> q = new LinkedList<Integer>(); M.addLast(q); for (int rx = 0; rx <= rad; rx++) { if (rx < cols) { int val = mat[ry][rx]; q.addLast(val); h += val; } } } } int firstSize = M.getFirst().size(); int mSize = M.size(); LinkedList<Integer> C = new LinkedList<Integer>(); LinkedList<Integer> Q = null; LinkedList<Integer> R = new LinkedList<Integer>(); for (int k = 0; k < firstSize; k++) { C.add(0); } for (int k = 0; k < mSize; k++) { R.add(0); } ListIterator<LinkedList<Integer>> mit; ListIterator<Integer> rit; ListIterator<Integer> cit; ListIterator<Integer> pit; for (mit = M.listIterator(), rit = R.listIterator(); mit.hasNext();) { Integer r = rit.next(); int rsum = 0; for (cit = C.listIterator(), pit = (mit.next()).listIterator(); cit.hasNext();) { Integer c = cit.next(); Integer p = 
pit.next(); rsum += p; cit.set(c + p); } rit.set(r + rsum); } for (i = 0; i < rows; ++i) { j = 0; if (i - rad > 0) { for(cit = C.listIterator(), pit = M.getFirst().listIterator(); cit.hasNext();) { Integer c = cit.next(); Integer p = pit.next(); cit.set(c - p); } M.removeFirst(); h -= R.getFirst(); R.removeFirst(); } int row = i + rad; if (row < rows && i > 0) { LinkedList<Integer> newQ = new LinkedList<Integer>(); M.addLast(newQ); int rx; int tot = 0; for (rx = 0, cit = C.listIterator(); rx <= rad; rx++) { if (rx < cols) { Integer c = cit.next(); int val = mat[row][rx]; newQ.addLast(val); cit.set(c + val); tot += val; } } R.addLast(tot); h += tot; } hh = h; Q = new LinkedList<Integer>(); Q.addAll(C); for (j = 1; j < cols; j++) { int col = j + rad; if (j - rad > 0) { hh -= Q.getFirst(); Q.pop(); } if (j + rad < cols) { int val = 0; for (int ry =- rad; ry <= rad; ry++) { int y = i + ry; if (y >= 0 && y < rows) { val += mat[y][col]; } } hh += val; Q.addLast(val); } } } }
I guess that most of this is due to the poor choice of LinkedList in Java and to the lack of an efficient (not shallow) copy method between two LinkedLists. How can I improve the third Java method? Am I making some conceptual error? As always, any criticism is welcome.
UPDATE: Even if it does not solve the issue, using ArrayLists, as suggested, instead of LinkedLists improves the third method. The second one still performs better (but when the number of rows and columns of the matrix is lower than 300 and the window radius is small, the first, unoptimized method is the fastest in Java).
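For comparison, here is a minimal sketch of the same incremental idea written with flat int[] arrays instead of LinkedList/ArrayDeque, which avoids Integer boxing and per-node allocation; the method name arrayWindowing is made up for illustration, and the window sum h is simply left in a local variable, just as the original methods do not store their results anywhere.

// A sketch of the column-sum sliding window using plain arrays (no boxing, no node allocation).
public static void arrayWindowing(int[][] mat, int rad) {
    int rows = mat.length;
    int cols = mat[0].length;
    int[] colSum = new int[cols]; // sum of each column over the current vertical window

    // Vertical window for row 0: rows 0..rad, clamped to the matrix.
    for (int y = 0; y <= rad && y < rows; y++) {
        for (int x = 0; x < cols; x++) {
            colSum[x] += mat[y][x];
        }
    }

    for (int i = 0; i < rows; i++) {
        if (i > 0) {
            int entering = i + rad;     // row that enters the window
            int leaving = i - rad - 1;  // row that leaves the window
            for (int x = 0; x < cols; x++) {
                if (entering < rows) colSum[x] += mat[entering][x];
                if (leaving >= 0) colSum[x] -= mat[leaving][x];
            }
        }

        // Horizontal sliding window over the column sums.
        int h = 0;
        for (int x = 0; x <= rad && x < cols; x++) {
            h += colSum[x];
        }
        for (int j = 0; j < cols; j++) {
            if (j > 0) {
                int entering = j + rad;
                int leaving = j - rad - 1;
                if (entering < cols) h += colSum[entering];
                if (leaving >= 0) h -= colSum[leaving];
            }
            // h now holds the window sum centered on pixel (i, j).
        }
    }
}

Sliding the vertical window down one row costs two additions per column, and sliding the horizontal window along a row costs two additions per pixel, which is what the deque-based third method is trying to achieve without the per-element object overhead.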

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

  • How does TransactionScope guarantee data integrity across multiple databases?

    - by Bas Smit
    Hey guys, can someone tell me the principle of how TransactionScope guarantees data integrity across multiple databases? I imagine it first sends the commands to the databases and then waits for the databases to respond before sending them a message to apply the command sent earlier. However, if execution stops abruptly while those apply messages are being sent, we could still end up with one database that has applied the command and one that has not. Can anyone shed some light on this? Edit: I guess what I'm asking is: can I rely on TransactionScope to guarantee data integrity when writing to multiple databases in case of a power outage or a sudden shutdown? Thanks, Bas Example: using(var scope=new TransactionScope()) { using (var context = new FirstEntities()) { context.AddToSomethingSet(new Something()); context.SaveChanges(); } using (var context = new SecondEntities()) { context.AddToSomethingElseSet(new SomethingElse()); context.SaveChanges(); } scope.Complete(); }
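    As background on the principle being asked about: once a second durable resource enlists, a TransactionScope transaction is escalated to a distributed transaction coordinator (MSDTC on Windows), and such coordinators rely on a two-phase commit protocol. The toy Java sketch below only illustrates that protocol; the ResourceManager interface and ToyTwoPhaseCommit class are invented for the example and are not TransactionScope's or MSDTC's actual API.

    import java.util.List;

    // Invented for illustration: what each participant (e.g. a database) must support.
    interface ResourceManager {
        boolean prepare();   // phase 1: durably stage the work and vote yes or no
        void commit();       // phase 2: make the staged work permanent
        void rollback();     // discard the staged work
    }

    class ToyTwoPhaseCommit {
        // Returns true only if the global transaction committed on every participant.
        static boolean run(List<ResourceManager> participants) {
            // Phase 1: nothing is committed until every participant has voted yes.
            for (ResourceManager rm : participants) {
                if (!rm.prepare()) {
                    for (ResourceManager r : participants) r.rollback();
                    return false;
                }
            }
            // A real coordinator force-writes its commit decision to a log here,
            // so a crash between the phases cannot leave the outcome unknown.
            // Phase 2: every participant voted yes, so commit everywhere.
            for (ResourceManager rm : participants) rm.commit();
            return true;
        }
    }

    Because a participant that has voted yes must hold its locks and keep the staged work until it learns the outcome, and the coordinator records that outcome durably, a crash during phase two delays the second database rather than leaving the two permanently inconsistent.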

  • Java: how to register a listener that listen to a JFrame movement

    - by cocotwo
    How can you track the movement of a JFrame itself? I'd like to register a listener that gets called back every time JFrame.getLocation() would return a new value. Here's a skeleton that compiles and runs; what kind of listener should I add so that I can track every JFrame movement on screen? import javax.swing.*; public class SO { public static void main( String[] args ) throws Exception { SwingUtilities.invokeAndWait( new Runnable() { public void run() { final JFrame jf = new JFrame(); final JPanel jp = new JPanel(); final JLabel jl = new JLabel(); updateText( jf, jl ); jp.add( jl ); jf.add( jp ); jf.pack(); jf.setVisible( true ); } } ); } private static void updateText( final JFrame jf, final JLabel jl ) { jl.setText( "JFrame is located at: " + jf.getLocation() ); jl.repaint(); } }
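    A minimal sketch of one way to do this, assuming the standard java.awt.event.ComponentListener is acceptable: componentMoved fires whenever the frame's location changes, so registering a ComponentAdapter inside run(), right after the frame is created, lets the skeleton's own updateText method refresh the label on every move.

    jf.addComponentListener( new java.awt.event.ComponentAdapter() {
        public void componentMoved( java.awt.event.ComponentEvent e ) {
            updateText( jf, jl );  // jf and jl are the final locals from the skeleton's run()
        }
    } );

    The same adapter also offers componentResized, componentShown and componentHidden if the label should track more than the location.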
