Search Results

Search found 4677 results on 188 pages for 'alternative'.


  • Dual Screen will only mirror after 12.04 upgrade

    - by Ne0
    I have been using Ubuntu with a dual screen for years now. After upgrading to 12.04 LTS I cannot get my dual screen working properly. Graphics:

    ```
    01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600]
    01:00.1 Display controller: Advanced Micro Devices [AMD] nee ATI RV350 AR [Radeon 9600] (Secondary)
    ```

    I noticed I was using the open source drivers and attempted to install the official binaries using the methods in this thread. Output:

    ```
    liam@liam-desktop:~$ sudo apt-get install fglrx fglrx-amdcccle
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be upgraded:
      fglrx fglrx-amdcccle
    2 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
    Need to get 45.1 MB of archives.
    After this operation, 739 kB of additional disk space will be used.
    Get:1 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx i386 2:8.960-0ubuntu1 [39.2 MB]
    Get:2 http://gb.archive.ubuntu.com/ubuntu/ precise/restricted fglrx-amdcccle i386 2:8.960-0ubuntu1 [5,883 kB]
    Fetched 45.1 MB in 1min 33s (484 kB/s)
    (Reading database ... 328081 files and directories currently installed.)
    Preparing to replace fglrx 2:8.951-0ubuntu1 (using .../fglrx_2%3a8.960-0ubuntu1_i386.deb) ...
    Removing all DKMS Modules
    Error! There are no instances of module: fglrx 8.951 located in the DKMS tree.
    Done.
    Unpacking replacement fglrx ...
    Preparing to replace fglrx-amdcccle 2:8.951-0ubuntu1 (using .../fglrx-amdcccle_2%3a8.960-0ubuntu1_i386.deb) ...
    Unpacking replacement fglrx-amdcccle ...
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    Setting up fglrx (2:8.960-0ubuntu1) ...
    update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken.
    update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-alternatives: warning: forcing reinstallation of alternative /usr/lib/fglrx/ld.so.conf because link group i386-linux-gnu_gl_conf is broken.
    update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
    update-initramfs: deferring update (trigger activated)
    update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae
    Loading new fglrx-8.960 DKMS files...
    Building only for 3.2.0-25-generic-pae
    Building for architecture i686
    Building initial module for 3.2.0-25-generic-pae
    Done.
    fglrx: Running module version sanity check.
     - Original module
       - No original module exists within this kernel
     - Installation
       - Installing to /lib/modules/3.2.0-25-generic-pae/updates/dkms/
    depmod.......
    DKMS: install completed.
    update-initramfs: deferring update (trigger activated)
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Setting up fglrx-amdcccle (2:8.960-0ubuntu1) ...
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic-pae
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place
    liam@liam-desktop:~$ sudo aticonfig --initial -f
    aticonfig: No supported adapters detected
    ```

    When I attempt to get my settings back to what they were before upgrading, I get this message:

    ```
    requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)
    ```

    and

    ```
    GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._gnome_2drr_2derror_2dquark.Code3: requested position/size for CRTC 81 is outside the allowed limit: position=(1440, 0), size=(1440, 900), maximum=(1680, 1680)
    ```

    Any ideas on what I need to do to fix this issue?


  • Change the Way Google Search Results Display in Firefox

    - by Asian Angel
    Are you tired of the default look of search results at Google? If you want a different, customized and pleasing look for them, then join us as we look at the GoogleMonkeyR user script.

    Note: user style scripts and user scripts can be added to most browsers, but we are using Firefox and the Greasemonkey extension for our example here.

    Before: here is the standard look for search results at Google... not bad, but it does not really stand out that well either.

    Installing the User Script: you may be asking yourself what makes this particular user script different from others. Take a look at the list of goodies that you get access to and you will understand:

    - Multiple columns of results
    - Removes "Sponsored Links"
    - Adds numbers to the results
    - Auto-loads more results
    - Removes web search dialogues
    - Opens links in a new tab
    - Favicons
    - GooglePreview
    - Self-updating
    - Can be configured from a simple user dialogue

    To get started, click on the webpage install button. Once you click it, you will see a window asking for confirmation to add the user script to Firefox. Click Install to complete the process.

    GoogleMonkeyR in Action: refreshing the same search page shown above shows a noticeable difference already. The light blue background makes the search results stand out a bit better. This is an improvement, but you will definitely want to see just how far you can go. Right-click on the Greasemonkey status bar icon, go to User Script Commands, and select GoogleMonkeyR Preferences. Once you have clicked on GoogleMonkeyR Preferences, the search page will be shaded out and you will have access to the user script's preferences. This is where you can really make your search results unique looking! After refreshing our search results with our first batch of changes, things looked even better: an entire page of results with the browser maximized and set for two columns. If you have the "Auto load more results" option enabled, new results are added very quickly as you scroll down. Favicons and GooglePreview images can be added to the results as well.

    Conclusion: if you have been wanting a more dramatic and pleasing look for search results at Google, you cannot go wrong with the GoogleMonkeyR user script. Change as little or as much as you want to get that perfect look in your browser.

    Links: Install the GoogleMonkeyR User Script | Download the Greasemonkey extension for Firefox (Mozilla Add-ons)


  • Using Live Data in Database Development Work

    - by Phil Factor
    Guest editorial for the Simple-Talk newsletter, in which Phil Factor reacts with some exasperation to a report that a majority of companies are still using financial and personal data for both developing and testing database applications.

    If you routinely test your development work using real production data that contains personal or financial information, you are probably being irresponsible and, at worst, risking a heavy financial penalty for your company. Surprisingly, over 80% of financial companies still do this. Plenty of data breaches and fraud have resulted from the use of real data for testing, and a data breach is a nightmare for any organisation that suffers one. The cost of each data breach averages out at around $7.2 million in the US (£1.9 million in the UK) in notification, escalation, credit monitoring, fines, litigation, legal costs, and lost business due to customer churn. 70% of data breaches originate from within the organisation.

    Real data can be exploited in a number of ways for malicious or criminal purposes. It isn't just the obvious use of items such as name and address, date of birth, social security number, and credit card and bank account numbers: data can be exploited in many subtle ways, so there are excellent reasons to give a high priority to the detection and prevention of data breaches. You'll never successfully guess all the ways that real data can be exploited maliciously, or the ease with which it can be accessed.

    It would be silly to argue that developers never need access to a copy of the database containing live data. Developers sometimes need to track a bug that can only be replicated on the data from the live database. However, it has to be done in a very restrictive harness. The law makes no distinction between development and production databases when a data breach occurs, so the data has to be held with all appropriate security measures in place. In Europe, the use of personal data for testing requires the explicit consent of the people whose data is being held. There are federal standards such as GLBA, PCI DSS and HIPAA, and most US states have privacy legislation. The task of ensuring compliance and tight security in such circumstances is an expensive and time-consuming overhead, and the developer is likely to face investigation if a data breach occurs, even if the company manages to stay in business.

    Ironically, the use of copies of live data isn't usually the most effective way to develop or test. Data is usually time-specific and is rarely current by the time it is used for testing; existing data doesn't help much for new functionality; and every time the data is refreshed from production, any test data is likely to be overwritten. It is also not always going to exercise all the 'edge' conditions that are likely to flush out bugs. You still have the task of simulating the dynamics of actual usage of the database, and here you have no alternative to creating 'spoofed' data.

    Because of the complexities of relational data, it used to be that there was no realistic alternative to developing and testing with live data. However, this is no longer the case. Real data can be obfuscated, or it can be created entirely from scratch. The latter process used to be impractical, but now there are plenty of third-party tools to choose from. The process of obfuscation isn't risk-free: it must access the live data, and the success of the obfuscation has to be carefully monitored.
    Database data security isn't an exciting topic to you or me, but to a hacker it can be an all-consuming obsession, especially if there is financial or political gain involved. This is not the sort of adversary one would wish for, and it is far better to accept, and work with, the security restrictions that exist for using live data in database development work, especially when tools exist to create large, realistic database test data that can be better for several aspects of testing.
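
    Purely as an illustration of the 'created entirely from scratch' option, here is a minimal C# sketch that fabricates plausible but entirely fictitious customer rows for a test load; every name, domain and value format below is invented for the example, and a real tool would cover far more edge conditions:

    ```csharp
    using System;

    // Generates spoofed customer records: realistic in shape, fictitious in content.
    class SpoofedCustomers
    {
        static readonly string[] FirstNames = { "Alice", "Bob", "Carol", "Dev", "Erin" };
        static readonly string[] LastNames  = { "Smith", "Jones", "Kumar", "Lopez", "Chen" };

        static void Main()
        {
            var rng = new Random(42); // fixed seed, so the test data is reproducible
            for (int i = 0; i < 10; i++)
            {
                string first = FirstNames[rng.Next(FirstNames.Length)];
                string last  = LastNames[rng.Next(LastNames.Length)];
                string email = $"{first}.{last}{i}@example.test".ToLowerInvariant();
                // Placeholder digits only; deliberately not a real card number.
                string card  = $"0000-0000-0000-{rng.Next(1000, 9999)}";
                Console.WriteLine($"{i},{first} {last},{email},{card}");
            }
        }
    }
    ```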


  • Some OBI EE Tricks and Tips in the Admin Tool By Gerry Langton

    - by hamsun
    How to set the log level from a session variable initialization block

    As we know, it is normal to set the log level non-zero for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user's properties in the Admin tool and update the log level, so I am showing a method which allows the log level to be set for all users via a session initialization block. This is particularly useful for anyone wanting an alternative way to set the log level. The screen shots shown use the OBIEE 11g SampleApp demo but are applicable to any environment.

    Open the appropriate RPD in online mode and navigate to Manage > Variables. Select Session Initialization Blocks, right-click in the white space and create a New Initialization Block. I called the initialization block Set_Loglevel. Now click on 'Edit Data Source' to enter the SQL. Choose the 'Use OBI EE Server' option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in online mode. The SQL can access any of the RPD tables and is purely used to return a value of 2. The 'Test' button confirms that the SQL is valid.

    Next, click on the 'Edit Data Target' button to add the LOGLEVEL variable to the initialization block. Check the 'Enable any user to set the value' option so that this will work for any user. Click OK, and a message will display because LOGLEVEL is a system session variable. Click 'Yes', then 'OK' to save the initialization block, and check in the online changes.

    To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click 'View Log' for the query just issued (which should be approximately the last in the list). A log file should exist and, with LOGLEVEL set to 2, should include both logical and physical SQL. If more diagnostic information is required, set LOGLEVEL to a higher value.

    If logging is required only for a particular analysis, then an alternative method can be used directly from the Analysis editor. Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL Clauses section and enter the following in the Prefix box:

    SET VARIABLE LOGLEVEL = 2;

    Click the 'Apply SQL' button. The SET VARIABLE statement will now prefix the analysis's logical SQL, so any time this analysis is run it will produce a log.

    You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at [email protected] if you have any further questions.

    About the author: Gerry Langton started at Siebel Systems in 1999, working as a technical instructor teaching both Siebel application development and Siebel Analytics (which subsequently became Oracle BI EE). Since 2006 Gerry has worked as Senior Principal Instructor within Oracle University, specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.


  • In hindsight, is basing XAML on XML a mistake or a good approach?

    - by romkyns
    XAML is essentially a subset of XML. One of the main benefits of basing XAML on XML is said to be that it can be parsed with existing tools. And it can, to a large degree, although the (syntactically non-trivial) attribute values stay in text form and require further parsing.

    There are two major alternatives to describing a GUI in an XML-derived language. One is to do what WinForms did, and describe it in real code. There are numerous problems with this, though it's not completely advantage-free (a question to compare XAML to this approach). The other major alternative is to design a completely new syntax specifically tailored for the task at hand. This is generally known as a domain-specific language.

    So, in hindsight, and as a lesson for future generations: was it a good idea to base XAML on XML, or would it have been better as a custom-designed domain-specific language? If we were designing an even better UI framework, should we pick XML or a custom DSL?

    Since it's much easier to think positively about the status quo, especially one that is quite liked by the community, I'll give some example reasons why building on top of XML might be considered a mistake. Basing a language on XML has one thing going for it: it's much easier to parse (the core parser is already available), requires much, much less design work, and alternative parsers are also much easier to write for third-party developers. But the resulting language can be unsatisfying in various ways:

    - It is rather verbose.
    - If you change the type of something, you need to change it in the closing tag as well.
    - It has very poor support for comments; it's impossible to comment out an attribute.
    - XML places limitations on the content of attributes.
    - The markup extensions have to be built "on top" of the XML syntax, not integrated deeply and nicely into it.
    - And, my personal favourite: if you set something via an attribute, you use completely different syntax than if you set the exact same thing as a content property (see the snippet below).

    It's also said that since everyone knows XML, XAML requires less learning. Strictly speaking this is true, but learning the syntax is a tiny fraction of the time spent learning a new UI framework; it's the framework's concepts that make the curve steep. Besides, the idiosyncrasies of an XML-based language might actually add to the "needs learning" basket.

    Are these disadvantages outweighed by the ease of parsing? Should the next cool framework continue the tradition, or invest the time to design an awesome DSL that can't be parsed by existing tools and whose syntax needs to be learned by everyone?

    P.S. Not everyone confuses XAML and WPF, but some do. XAML is the XML-like thing; WPF is the framework with support for bindings, theming, hardware acceleration and a whole lot of other cool stuff.
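
    To make that last bullet concrete, here is a small sketch in standard WPF XAML (the property values are hypothetical) of setting the same property once as an attribute and once as a content property:

    ```xml
    <!-- Attribute syntax: the value is a plain string -->
    <Button Content="Click me" Background="Red" />

    <!-- Property element syntax: the same Background, set as content -->
    <Button Content="Click me">
      <Button.Background>
        <SolidColorBrush Color="Red" />
      </Button.Background>
    </Button>
    ```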


  • Return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about the best way of working: whether there are other alternatives, or whether this is the only way. Using CodeIgniter, I have the typical two functions: one to list records and one to return the total number of records (used for paging). The problem is that they are rather large. Here are the two functions from my model.

    Count rows:

    ```php
    function get_all_count()
    {
        $this->db->select('u.id_user');
        $this->db->from('user u');
        if ($this->session->userdata('detail') != '1') {
            $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
            $this->db->where('id_detail', $this->session->userdata('detail'));
            if ($this->session->userdata('management') === '1') {
                $this->db->or_where('detail', 1);
            } else {
                $this->db->where("id_profile IN (
                    SELECT e2.id_profile
                    FROM profile e, profile e2, profile_path p, profile_path p2
                    WHERE e.id_profile = " . $this->session->userdata('profile') . "
                    AND p2.id_profile = e.id_profile
                    AND p.path LIKE(CONCAT(p2.path,'%'))
                    AND e2.id_profile = p.id_profile
                )", NULL, FALSE);
                $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
            }
        }
        $this->db->where('u.id_user <>', 1);
        $this->db->where('flag <>', 3);
        $query = $this->db->get();
        return $query->num_rows();
    }
    ```

    Results per page:

    ```php
    function get_all($limit, $offset, $sort = '')
    {
        $this->db->select('u.id_user, user, email, flag');
        $this->db->from('user u');
        if ($this->session->userdata('detail') != '1') {
            $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
            $this->db->where('id_detail', $this->session->userdata('detail'));
            if ($this->session->userdata('management') === '1') {
                $this->db->or_where('detail', 1);
            } else {
                $this->db->where("id_profile IN (
                    SELECT e2.id_profile
                    FROM profile e, profile e2, profile_path p, profile_path p2
                    WHERE e.id_profile = " . $this->session->userdata('profile') . "
                    AND p2.id_profile = e.id_profile
                    AND p.path LIKE(CONCAT(p2.path,'%'))
                    AND e2.id_profile = p.id_profile
                )", NULL, FALSE);
                $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
            }
        }
        $this->db->where('u.id_user <>', 1);
        $this->db->where('flag <>', 3);
        if ($sort) {
            $this->db->order_by($sort);
        }
        $this->db->limit($limit, $offset);
        $query = $this->db->get();
        return $query->result();
    }
    ```

    As you can see, most of the code is repeated between the two functions; the only differences are the selected fields and the paging. I wonder whether there is any alternative that returns both the results and the total count from a single function. I have seen many tutorials, and all of them create two functions: one to count and another to fetch the results. Is there a more optimal approach?


  • Recommendations on developing a WPF application without using MVVM or similar

    - by Metro Smurf
    We were building out the next version of an in-house thick-client application using WPF/Prism (Composite Application Library). As we were nearly done with the client, our team was put under new management, and shortly thereafter:

    1. We were directed to drop the Prism framework to keep things simple. This includes not using any type of inversion of control.
    2. We were directed to build out the WPF application without using MVVM or similar, and more along the lines of a traditional WinForms application. The idea is that if a developer sees a control in Visual Studio's designer view, then (s)he should be able to click on the control and see exactly what it's doing without having to traverse through a view-model (or similar).
    3. We have now been tasked with building out the WPF application using one primary Window, a Frame control to contain the content, and a Ribbon outside of the frame for the menu items.

    The reason we were given for using a Frame control:

    a. We will show a view in the Frame with a Page (not a user control) and then load the page in the Frame.
    b. When a new view is to be shown in the Frame, the current view (Page) will be closed/disposed and the new view (Page) will take its place in the Frame.
    c. When a developer looks at the Page in design view, (s)he will be able to click on any control and see exactly what is being done.

    Given the restrictions of 1 and 2 above, we'd like to present another method of building out the application that:

    - Can be presented as an alternative to the "Frame methodology" (item 3 above, sketched in code below) but still provides the same type of functionality.
    - Does not use MVVM (see #1 and #2 above).

    Given the direction we've been given, any suggestions as to an alternative we can present? I'd request that the responses be kept on a professional level, and thank you in advance.
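
    For reference, a rough C# sketch of the directed Frame/Page navigation in plain code-behind; the frame, page and handler names here are hypothetical, and the IDisposable check merely illustrates the "close/dispose the current Page" requirement in point (b):

    ```csharp
    using System;
    using System.Windows;

    public partial class MainWindow : Window
    {
        // Assumes MainWindow.xaml declares <Frame x:Name="MainFrame" /> plus a Ribbon.
        private void CustomersRibbonButton_Click(object sender, RoutedEventArgs e)
        {
            // Dispose whatever Page is currently hosted, per requirement (b).
            if (MainFrame.Content is IDisposable oldPage)
                oldPage.Dispose();

            // Navigate the Frame to the new Page; everything the page does lives
            // in CustomersPage.xaml.cs, reachable straight from the designer.
            MainFrame.Navigate(new CustomersPage());
        }
    }
    ```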


  • gunit syntax for tree walker with a flat list of nodes

    - by Kaleb Pederson
    Here's a simple gunit test for a portion of my tree grammar, which generates a flat list of nodes:

    ```
    objectOption walks objectOption:
    <<one:"value">> -> (one "value")
    ```

    Although you define a tree in ANTLR's rewrite syntax using a caret (i.e. ^(ROOT child...)), gunit matches trees without the caret, so the expectation above represents a tree. It's not surprising that it fails, since the actual output is a flat list of nodes and not a tree. This results in a test failure:

    ```
    1 failures found:
    test2 (objectOption walks objectOption, line17) -
    expected: (one \"value\")
    actual: one \"value\"
    ```

    Another option which seems intuitive is to leave off the parentheses, like this:

    ```
    objectOption walks objectOption:
    <<one:"value">> -> one "value"
    ```

    But gunit doesn't like this syntax. It seems to result in a parse failure in the gunit grammar:

    ```
    line 17:20 no viable alternative at input 'one'
    line 17:24 missing ':' at 'value'
    line 0:-1 no viable alternative at input '<EOF>'
    java.lang.NullPointerException
        at org.antlr.gunit.OutputTest.getExpected(OutputTest.java:65)
        at org.antlr.gunit.gUnitExecutor.executeTests(gUnitExecutor.java:245)
        ...
    ```

    What is the correct way to match a flat tree?


  • SCM for Xcode?

    - by Gregor
    I am developing an application for the Mac as a small team (me + another person) effort. We are located in different cities and have started to see the need for solid source control management. Neither of us has any experience with this, and both of us are relatively new to Cocoa/Obj-C/Xcode (but we do have C knowledge).

    Does anyone have any recommendations as to which SCM system to choose? I understand that a lot of people are using Subversion, which is also supported in Xcode 3.1. Does anyone have experience with using Subversion through Xcode? Or is it a better option to choose a standalone GUI alternative, such as Versions? Grateful for any input on this.

    Gregor Tomasevic, Sweden

    Update/personal experiences: Since this post, we have tried Versions and Cornerstone (both of which are SVN GUI clients), as well as Xcode's built-in support for SVN. We were not particularly pleased with Versions, which seemed to have some problems with committing unversioned files/build files. The built-in SVN support in Xcode works quite well, although it probably has limitations that we have still not run into. Cornerstone is both simple to use and powerful, and does not seem to suffer from the problems we encountered with Versions. So far we have just tried committing, updating the repo, checking out latest/previous versions of our files, and some file comparison. It might be a whole different ball game once you start working extensively with branching, an area in which we have been told both these GUI clients might have some weaknesses. For what it's worth (and with only days of evaluation), Cornerstone seems to be a somewhat better alternative, although for simpler SCM, Xcode works well too. Thanks for all the comments.


  • ASP.NET MVC 1 and 2 on Mono 2.4 with Fluent NHibernate

    - by SztupY
    Hi! I'd like to create an application using ASP.NET MVC that should run under Mono 2.4 (compiling will be done on a Windows box). Has anyone had any luck with this? Here is what I've already tried:

    - ASP.NET MVC on Mono without any persistence model support, using NHaml as the view engine.
    - S#arp Architecture, which is a quite good framework IMHO, but it depends too much on things that do not work well under Mono (like Windsor).

    The first part worked fine; I didn't encounter any major problems. But I couldn't get the second part working. It seems its dependency on Castle.Windsor breaks the whole Mono support (but there might be other problem parts too). Therefore I decided to create an alternative framework that borrows some of the ideas of S#arp Architecture, but is designed to work under Mono (and if I'm able to do this, I'll release it for the community, of course). The controller and view parts are working fine (not much magic here though; they have always worked), but I have some questions before I start the job on the persistence part:

    - Which NHibernate versions work under Mono? I've heard 1.2 works fine. Does 2.0.1/2.1 beta work under Mono?
    - Do Fluent NHibernate and NHibernate.Linq work under Mono? (For the latter, it seems it needs some dependencies that aren't available in Mono.)
    - Are there any good alternatives to NHibernate for persistence support under Mono?

    Alternative questions:

    - Are there any frameworks that already have Mono + persistence + ASP.NET MVC support, or am I the first one to think about this?
    - If you have already done this: what are your opinions on stability/usability?

    Thanks for the answers.

    EDIT: Updated the framework to support ASP.NET MVC 2: http://shaml.sztupy.hu/


  • WCF: WTF! Does WCF raise the bar or just the complexity level?

    - by rp
    I understand the value of the three-part service/host/client model offered by WCF. But is it just me, or does it seem like WCF took something pretty direct and straightforward (the ASMX model) and made a mess out of it?

    Is there an alternative to SvcUtil's command-line step back in time for generating the proxy? With ASMX services a test harness was automatically provided; is there a good alternative today with WCF? I appreciate that the WS-* stuff is more tightly integrated with WCF and hope to find some payoff for WCF there, but geeze, otherwise I'm perplexed.

    Also, the state of books available for WCF is abysmal at best. Juval Lowy, a superb author, has written a good O'Reilly reference book, "Programming WCF Services", but it doesn't do that much (for me, anyway) for learning how to use WCF. That book's precursor (and a little better organized, but not much, as a tutorial) is Michele Leroux Bustamante's "Learning WCF". It has good spots but is outdated in places, and its corresponding web site is gone.

    Do you have good WCF learning references besides just continuing to Google the bejebus out of things?

    Thanks, rp
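
    One answer that often comes up for the proxy-generation complaint, assuming the client can reference the assembly containing the service contract, is to skip SvcUtil entirely and create the channel at runtime with ChannelFactory<T>. A minimal sketch (the contract, binding and address below are hypothetical):

    ```csharp
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        string GetQuote(string symbol);
    }

    class Client
    {
        static void Main()
        {
            // No generated proxy class, no svcutil step: build the channel in code.
            var factory = new ChannelFactory<IQuoteService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost:8080/quotes"));

            IQuoteService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.GetQuote("MSFT"));

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
    ```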


  • Twitter4J throws exception in TwitterFactory

    - by Philipp Andre
    Hello guys, I'm trying to access Twitter via OAuth. I registered my app, downloaded Twitter4J, added the JARs to my Eclipse project, and then tried to execute the following code:

    ```java
    Twitter twitter = new TwitterFactory().getOAuthAuthorizedInstance("[key]", "[secretKey]");
    RequestToken requestToken = twitter.getOAuthRequestToken();
    System.out.println(requestToken.getAuthorizationURL());
    ```

    But it raises the following exception:

    ```
    [Sat May 29 11:19:11 CEST 2010]Using class twitter4j.internal.logging.StdOutLoggerFactory as logging factory.
    [Sat May 29 11:19:11 CEST 2010]Use twitter4j.internal.http.alternative.HttpClientImpl as HttpClient implementation.
    Exception in thread "main" java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:71)
        at twitter4j.internal.http.HttpClientWrapper.<init>(HttpClientWrapper.java:59)
        at twitter4j.http.OAuthAuthorization.init(OAuthAuthorization.java:83)
        at twitter4j.http.OAuthAuthorization.<init>(OAuthAuthorization.java:74)
        at twitter4j.TwitterFactory.getOAuthAuthorizedInstance(TwitterFactory.java:112)
        at MainProgram.main(MainProgram.java:18)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
        at java.lang.reflect.Constructor.newInstance(Unknown Source)
        at twitter4j.internal.http.HttpClientFactory.getInstance(HttpClientFactory.java:65)
        ... 5 more
    Caused by: java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient
        at twitter4j.internal.http.alternative.HttpClientImpl.<init>(HttpClientImpl.java:63)
        ... 10 more
    ```

    I currently can't figure out why... can you please give me some suggestions?

    Best regards, Philipp


  • Post-loading: check if an image is in the browser cache

    - by Mathieu
    Short version of the question: is there a navigator.mozIsLocallyAvailable equivalent that works in all browsers, or an alternative?

    Long version :) Hi, here is my situation: I want to implement an HtmlHelper extension for ASP.NET MVC that handles image post-loading easily (using jQuery). So I render the page with empty image sources, with the real source specified in the "alt" attribute. I insert the image sources after the "window.onload" event, and it works great. I did something like this:

    ```javascript
    $(window).bind('load', function() {
        var plImages = $(".postLoad");
        plImages.each(function() {
            $(this).attr("src", $(this).attr("alt"));
        });
    });
    ```

    The problem: after the first load, the post-loaded images are cached. But if the page takes 10 seconds to load, the cached post-loaded images will only be displayed after those 10 seconds. So I thought of specifying the image sources on the "document.ready" event when the image is already cached, to display such images immediately. I found the function navigator.mozIsLocallyAvailable to check if an image is in the cache. Here is what I've done with jQuery:

    ```javascript
    // specify cached image sources on DOM ready
    $(document).ready(function() {
        var plImages = $(".postLoad");
        plImages.each(function() {
            var source = $(this).attr("alt");
            var disponible = navigator.mozIsLocallyAvailable(source, true);
            if (disponible)
                $(this).attr("src", source);
        });
    });

    // specify uncached image sources after page loading
    $(window).bind('load', function() {
        var plImages = $(".postLoad");
        plImages.each(function() {
            if ($(this).attr("src") == "")
                $(this).attr("src", $(this).attr("alt"));
        });
    });
    ```

    It works on Mozilla's DOM, but it doesn't work on any other one. I tried navigator.isLocallyAvailable: same result. Is there any alternative?


  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model.

    I have thought of doing it the way rsync/rdiff-backup do it: have the client open an SSH connection, run a server executable, and communicate across pipes. Another alternative would be to write a server which would handle authentication and communicate with the client via SSL. The reason I have thought of this is that there is "state" information, like how many backup jobs are set up and so on, that must be maintained.

    Another alternative I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question, but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.


  • How to avoid loading a LINQ to SQL object twice when editing it on a website

    - by emzero
    Hi guys, I know you are all tired of these LINQ-to-SQL questions, but I'm barely starting to use it (I never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like, and I couldn't find a good alternative to it.

    In most examples of editing a LINQ object I've seen, the object is loaded first (hitting the db) to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here's a simplified example from Scott Gu's NerdDinner site:

    ```csharp
    //
    // GET: /Dinners/Edit/5

    [Authorize]
    public ActionResult Edit(int id)
    {
        Dinner dinner = dinnerRepository.GetDinner(id);
        return View(new DinnerFormViewModel(dinner));
    }

    //
    // POST: /Dinners/Edit/5

    [AcceptVerbs(HttpVerbs.Post), Authorize]
    public ActionResult Edit(int id, FormCollection collection)
    {
        Dinner dinner = dinnerRepository.GetDinner(id);
        UpdateModel(dinner);
        dinnerRepository.Save();
        return RedirectToAction("Details", new { id = dinner.DinnerID });
    }
    ```

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects, I don't like getting it twice when it should be retrieved once, modified, and then committed back to the database.

    So again: am I really missing something? Or is it really hitting the database twice? (In the example above it won't hurt, but there could be cases where fetching an object or set of objects is heavy.) If so, what alternative do you think is best to avoid double-loading the object?

    Thank you so much. Greetings!
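
    One commonly suggested alternative, assuming the table has a ROWVERSION/timestamp column so LINQ to SQL can detect concurrency conflicts without the original values, is to build the entity from the posted form and attach it to a fresh DataContext as modified, turning the round trip into a single UPDATE. A rough sketch (the repository is bypassed and the context name is hypothetical):

    ```csharp
    [AcceptVerbs(HttpVerbs.Post), Authorize]
    public ActionResult Edit(int id, FormCollection collection)
    {
        var dinner = new Dinner { DinnerID = id };
        UpdateModel(dinner);                 // copy posted values onto the new instance

        using (var db = new NerdDinnerDataContext())
        {
            // Attach as modified: this requires a timestamp/rowversion column on the
            // table; otherwise LINQ to SQL needs original values to build the UPDATE.
            db.Dinners.Attach(dinner, true);
            db.SubmitChanges();              // one UPDATE, no preliminary SELECT
        }

        return RedirectToAction("Details", new { id = dinner.DinnerID });
    }
    ```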


  • C++ Serialization Clean XML Similar to XSTREAM

    - by disown
    I need to write a Linux C++ app which saves its settings in XML format (for easy hand editing) and also communicates with existing apps through XML messages over sockets and HTTP. The problem is that I haven't been able to find any intelligent libs to help me; I don't particularly feel like writing DOM or SAX code just to write and read some very simple messages.

    Boost Serialization was almost a match, but it adds a lot of Boost-specific data to the XML it generates. This obviously doesn't work well for interchange formats. I'm wondering if it is possible to make Boost Serialization, or some other C++ serialization library, generate clean XML. I don't mind if there are some required extra attributes, like a version attribute, but I'd really like to be able to control their naming and also get rid of 'features' that I don't use: tracking_level and class_id, for instance.

    Ideally I would just like to have something similar to XStream in Java. I am aware of the fact that C++ lacks introspection and that it is therefore necessary to do some manual coding, but it would be nice if there was a clean solution to just read and write simple XML without kludges!

    If this cannot be done, I am also interested in tools where the XML schema is the canonical resource (contract first): a good JAXB alternative for C++. So far I have only found commercial solutions like CodeSynthesis XSD, and I would prefer open source solutions. I have tried gSOAP, but it generates really ugly code and it is also SOAP-specific.

    In desperation I also started looking at alternative serialization formats and at protocol buffers. XML serialization for protocol buffers exists, but only for Java! It really surprises me that protocol buffers seem to be a better supported data interchange format than XML. I'm going mad just finding libs for this app and I really need some new ideas. Anyone?


  • Replacement for deprecated SQL Server User Defined Type with a bound Rule and Default

    - by Adam Jones
    We have a User Defined Data Type of YesNo, which is an alias for char(1). The type has a bound Rule (the value must be Y or N) and a bound Default (N). The aim of this is that when any of the development team creates a new field of type YesNo, the rule and default are automatically bound to the new column.

    Rules and Defaults have been deprecated and won't be available in a future version of SQL Server. Is there another way to achieve the same functionality?

    I should add that I'm aware I could use CHECK and DEFAULT constraints to replicate the functionality of the bound Rule and Default objects; however, these would have to be applied at each usage of the type, rather than getting the functionality 'for free' by using a UDT which has a bound Rule and Default. The post relates to a database that backs an existing application, rather than a new development, so I'm aware that our use of UDTs is less than optimal.

    I suspect the answer to the question is 'No'; however, normally when features are deprecated there is an alternative syntax that can be used as a drop-in replacement, so I wanted to pose the question in case someone knows of an alternative.
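
    For comparison, the per-column replacement referred to above would look something like this minimal T-SQL sketch (the table and column names are invented for the example):

    ```sql
    -- The constraints must be repeated on every column that used the YesNo type:
    CREATE TABLE dbo.Subscription
    (
        IsActive char(1) NOT NULL
            CONSTRAINT DF_Subscription_IsActive DEFAULT ('N')
            CONSTRAINT CK_Subscription_IsActive CHECK (IsActive IN ('Y', 'N'))
    );
    ```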


  • Is it possible to write map/reduce jobs for Amazon Elastic MapReduce using .NET?

    - by Chris
    Is it possible to write map/reduce jobs for Amazon Elastic MapReduce (http://aws.amazon.com/elasticmapreduce/) using .NET languages? In particular I would like to use C#.

    Preliminary research suggests not. The above URL's marketing text suggests you have a "choice of Java, Ruby, Perl, Python, PHP, R, or C++", without mentioning .NET languages. This Amazon thread (http://developer.amazonwebservices.com/connect/thread.jspa?messageID=136051, "Support for C# / F# map/reducers") explicitly says that "currently Amazon Elastic MapReduce does not support Mono platform or languages such as C# or F#."

    The above suggests that it can't be done. I'm wondering if there are any workarounds, though. For example, can I modify the Elastic MapReduce machine image for my account and install Mono on there? An alternative, suggested by the Amazon FAQ entries "Using Other Software Required by Your Jar" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?CHAP_AdvancedTopics.html) and "How to Use Additional Files and Libraries With the Mapper or Reducer" (http://docs.amazonwebservices.com/ElasticMapReduce/latest/DeveloperGuide/index.html?addl_files.html), is to make the first step of the map/reduce job install Mono on the local instance. That sounds kind of inefficient, but maybe it could work?

    Maybe a saner alternative would be to forgo the convenience of Elastic MapReduce and manually set up my own Hadoop cluster on EC2. Then I assume I could install Mono without difficulty.
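
    If Mono were present on the instances, the natural route would be Hadoop streaming, where a mapper is just an executable that reads lines on stdin and writes tab-separated key/value pairs to stdout. Purely as a hedged sketch of what such a C# mapper could look like (this is not an Amazon-documented scenario):

    ```csharp
    using System;

    // A streaming-style word-count mapper: for each input line,
    // emit "word<TAB>1" on standard output.
    class WordCountMapper
    {
        static void Main()
        {
            string line;
            while ((line = Console.ReadLine()) != null)
            {
                var words = line.Split(new[] { ' ', '\t' },
                                       StringSplitOptions.RemoveEmptyEntries);
                foreach (var word in words)
                    Console.WriteLine("{0}\t{1}", word.ToLowerInvariant(), 1);
            }
        }
    }
    ```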


  • How can I find out if two arguments are instances of the same, but unknown class?

    - by Ingmar
    Let us say we have a method which accepts two arguments, o1 and o2, of type Object and returns a boolean value. I want this method to return true only when the arguments are instances of the same class, e.g. this should return true:

    ```java
    foo(new Integer(4), new Integer(5));
    ```

    However:

    ```java
    foo(new SomeClass(), new SubtypeSomeClass());
    ```

    should return false, and so should:

    ```java
    foo(new Integer(3), "zoo");
    ```

    I believe one way is to compare the fully qualified class names:

    ```java
    public boolean foo(Object o1, Object o2) {
        Class<? extends Object> c1 = o1.getClass();
        Class<? extends Object> c2 = o2.getClass();
        if (c1.getName().equals(c2.getName())) {
            return true;
        }
        return false;
    }
    ```

    An alternative conditional statement would be:

    ```java
    if (c1.isAssignableFrom(c2) && c2.isAssignableFrom(c1)) {
        return true;
    }
    ```

    The latter alternative is rather slow. Are there other alternatives to this problem?


  • How to benchmark on multi-core processors

    - by Pascal Cuoq
    I am looking for ways to perform micro-benchmarks on multi-core processors.

    Context: at about the same time desktop processors introduced out-of-order execution, which made performance hard to predict, they (perhaps not coincidentally) also introduced special instructions to get very precise timings. Examples of these instructions are rdtsc on x86 and mftb on PowerPC. These instructions gave timings more precise than a system call could ever allow, and let programmers micro-benchmark their hearts out, for better or for worse.

    On a yet more modern processor with several cores, some of which sleep some of the time, the counters are not synchronized between cores. We are told that rdtsc is no longer safe to use for benchmarking, but I must have been dozing off when the alternative solutions were explained.

    Question: some systems may save and restore the performance counter and provide an API call to read the proper sum. If you know what this call is for any operating system, please let us know in an answer. Some systems may allow cores to be turned off, leaving only one running. I know Mac OS X Leopard does when the right preference pane is installed from the Developer Tools. Do you think this makes rdtsc safe to use again?

    More context: please assume I know what I am doing when trying to do a micro-benchmark. If you are of the opinion that an optimization whose gains cannot be measured by timing the whole application is not worth making, I agree with you, but I cannot time the whole application until the alternative data structure is finished, which will take a long time. In fact, if the micro-benchmark were not promising, I could decide to give up on the implementation now; I need figures to provide in a publication whose deadline I have no control over.
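
    For what it's worth, on Windows one such API is QueryPerformanceCounter, which .NET surfaces as System.Diagnostics.Stopwatch; the OS is responsible for keeping it usable across cores, unlike raw rdtsc. A hedged C# sketch of a micro-benchmark loop built on it (the workload is a placeholder, and the usual caveats about warm-up and repeated runs apply):

    ```csharp
    using System;
    using System.Diagnostics;

    class MicroBench
    {
        static void Main()
        {
            // Stopwatch wraps QueryPerformanceCounter when a high-resolution
            // counter is available.
            Console.WriteLine("High resolution: " + Stopwatch.IsHighResolution);

            const int iterations = 1000000;
            var sw = Stopwatch.StartNew();
            long sum = 0;
            for (int i = 0; i < iterations; i++)
                sum += i ^ (i >> 3);          // placeholder workload
            sw.Stop();

            // Print sum so the loop cannot be optimized away entirely.
            Console.WriteLine("total: {0} ticks, {1:F2} ns/iteration (sum={2})",
                sw.ElapsedTicks,
                sw.Elapsed.TotalMilliseconds * 1000000.0 / iterations,
                sum);
        }
    }
    ```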


  • Exposing a service to external systems - How should I design the contract?

    - by Larsi
    Hi! I know this question has been asked here before, but I'm still not sure what to choose. My service will be called from many third-party systems in the enterprise. I'm almost sure the information the service collects (MyBigClassWithAllInfo) will change during the product's lifetime. Is it still a good idea to expose objects? These are basically my two alternatives:

    ```csharp
    [ServiceContract]
    public interface ICollectStuffService
    {
        [OperationContract]
        SetDataResponseMsg SetData(SetDataRequestMsg dataRequestMsg);
    }

    // Alternative 1: put all the data inside an XML string
    [DataContract]
    public class SetDataRequestMsg
    {
        [DataMember]
        public string Body { get; set; }

        [DataMember]
        public string OtherPropertiesThatMightBeHandy { get; set; } // ??
    }

    // Alternative 2: expose the objects
    [DataContract]
    public class SetDataRequestMsg
    {
        [DataMember]
        public Header Header { get; set; }

        [DataMember]
        public MyBigClassWithAllInfo ExposedObject { get; set; }
    }

    public class SetDataResponseMsg
    {
        [DataMember]
        public ServiceError Error { get; set; }
    }
    ```

    The XML payload would look like this:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <Message>
      <Header>
        <InfoAboutTheSender>...</InfoAboutTheSender>
      </Header>
      <StuffToCollectWithAllTheInfo>
        <stuff1>...</stuff1>
      </StuffToCollectWithAllTheInfo>
    </Message>
    ```

    Any thoughts on how this service should be implemented? Thanks, Larsi


  • Load random swf object with jquery

    - by Eggnog
    I'm working on a site that uses full-screen background video. I have three Flash videos I'd like to load into the page, one of which should be displayed randomly on each page refresh. I know you can achieve this with ActionScript by loading the movies into one main .swf, but I'm rubbish with Flash and am looking for an alternative solution.

    My search so far has uncovered a method using jQuery. Unfortunately my attempts to implement it have failed. I'm hoping someone more knowledgeable than myself can tell me whether this technique is valid, or what I'm doing wrong. Here's my code to date:

    ```html
    <script type="text/javascript" src="files/js/swfobject.js"></script>
    <script type="text/javascript">
    $(document).ready(function() {
        var sPath;
        var i = Math.floor(Math.random() * 3); // three videos, so three cases
        switch (i) {
            case 0:
                sPath = "flash/homepage_video.swf"; // presumably a different .swf per case
                break;
            case 1:
                sPath = "flash/homepage_video.swf";
                break;
            case 2:
                sPath = "flash/homepage_video.swf";
                break;
        }
        // load the flash version of the image
        var so = new SWFObject(sPath, "flash-background", "100%", "100%");
        so.addParam("scale", "exactFit");
        so.addParam("movie", sPath); // NOTE: the sPath variable is also used here
        so.addParam("quality", "high");
        so.addParam("wmode", "transparent");
        so.write("homeBackground");
        // NOTE: The value in this call to write() MUST match the name of the
        // HTML element (<div>) you're expecting the swf to show up in
    });
    </script>
    <body>
        <div id="homeBackground">
            <p>This is alternative content</p>
        </div>
    </body>
    ```

    I'm open to other suggestions that don't involve jQuery (PHP perhaps). Thanks.


  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes.

    I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly will this be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes and across 1000s of documents. Worse yet, there is no easy way to spread these updates out in time; they will usually be happening close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? It seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files, and reference the filename from my MongoDB document. (I'm using Python and PyMongo, so I was looking at PyTables.) I'd prefer to avoid this if possible.

    Is there any other alternative that I am overlooking here? Thanks in advance.


  • Strategies for Synchronizing Data Between a Rails App and iPhone App

    - by jessecurry
    I've written many iPhone applications that pull data from web services, and I've worked on synchronizing data between an iPhone app and a web application, but I've always felt that there is probably a better way to handle the synchronization. I'd like to know what strategies you have used to synchronize data between your iPhone (read: mobile) apps and your Rails (read: web) applications:

    - Are there any strategies that scale particularly well?
    - How have you dealt with large amounts of data? (Do you use paged responses?)
    - How do you make sure that data is not overwritten?
    - Is there a reason to avoid Ruby on Rails? If so, can you suggest an alternative? What is better about the alternative?
    - What strategies have failed? Why do you believe those strategies failed?

    I would like to keep all of the data modifications on the server, but the particular application I am about to start work on will need the ability to operate while disconnected from the network. The user will be able to update data on the mobile device and update data through the web application. When the user's mobile device connects to the server, any local changes will be pushed to the server.


  • Give Gridview cell font colour according to condition

    - by vim
    How can I colour a GridView cell's text according to a condition? I have the GridView below, and I am populating it from code-behind:

    ```aspx
    <asp:GridView ID="GridView3" runat="server" AllowPaging="true" PageSize="5"
        AutoGenerateColumns="false" Width="100%"
        OnPageIndexChanging="GridView3_PageIndexChanging" CssClass="Grid">
        <RowStyle CssClass="GridRow" />
        <Columns>
            <asp:BoundField HeaderText="No" DataField="id" Visible="false" />
            <asp:BoundField HeaderText="Scenario" DataField="Scenario" />
            <asp:BoundField HeaderText="Type" DataField="Type" />
            <asp:BoundField HeaderText="Station Name" DataField="StationName" />
            <asp:BoundField HeaderText="Parameter" DataField="PARAM" />
            <asp:BoundField HeaderText="Value" DataField="Value" SortExpression="Value"
                DataFormatString="{0:F2}" />
        </Columns>
        <PagerStyle BackColor="White" Height="40px" Font-Bold="true" Font-Size="Medium"
            ForeColor="Green" HorizontalAlign="Center" />
        <PagerSettings FirstPageText="First" LastPageText="Last" Mode="NumericFirstLast"
            PageButtonCount="3" />
        <HeaderStyle BackColor="#ABDB78" ForeColor="Black" Height="35px" Font-Size="13px"
            Font-Names="verdana" />
    </asp:GridView>
    ```

    I am binding data to this grid in code-behind:

    ```csharp
    protected void ReservGridBind()
    {
        string name = Request.QueryString[1].ToString();
        string query = "SELECT SD.id,SD.Scenario,SD.Value,PR.Type,PR.StationName,PR.PARAM from sgwebdb.param_reference as PR Inner join sgwebdb.scenario_data as SD ON PR.Param_Id=SD.Param_Id INNER JOIN sgwebdb.qualicision_detail as Q ON SD.SCENARIO=Q.Alternative where PR.Type='Reservoirs' and Q.Alternative='" + name + "'";
        this.GridView1.DataSource = PSI.DataAccess.Database.DatabaseManager
            .GetConnection().GetData(query);
        GridView1.DataBind();
    }
    ```

    The condition: if the value < 0, the text should be blue; if the value > 0 and < 4, green; if the value > 4, red. Please help me colour this GridView's cell text.
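
    A common way to do this is the GridView's RowDataBound event, sketched below against the "Value" column above (cell index 5); the handling of the exact boundary values 0 and 4 is an assumption, since the stated condition leaves them unspecified:

    ```csharp
    // Wire up in the markup: OnRowDataBound="GridView3_RowDataBound"
    protected void GridView3_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType != DataControlRowType.DataRow)
            return;

        decimal value;
        // "Value" is the sixth column defined in the markup, hence index 5.
        if (decimal.TryParse(e.Row.Cells[5].Text, out value))
        {
            if (value < 0)
                e.Row.Cells[5].ForeColor = System.Drawing.Color.Blue;
            else if (value < 4)
                e.Row.Cells[5].ForeColor = System.Drawing.Color.Green;
            else
                e.Row.Cells[5].ForeColor = System.Drawing.Color.Red;
        }
    }
    ```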

