Search Results

Search found 1666 results on 67 pages for 'lurker indeed'.

Page 20/67

  • Cannot run update due to a dpkg error with burg-theme-minimal-sir

    - by boywithaxe
    I cannot run an update, or indeed run apt-get remove, due to a dpkg error with a package that is part of super-boot-manager. Running an update returns:

        dpkg: error processing burg-theme-minimal-sir (--configure):
         subprocess installed post-installation script returned error exit status 1

    I tried removing this package alone, with the same error. Trying to remove super-boot-manager returns:

        (Reading database ... 225474 files and directories currently installed.)
        Removing burg-theme-minimal-sir ...
        Generating burg.cfg ...
        /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
        No path or device is specified.
        Try `/usr/sbin/burg-probe --help' for more information.
        dpkg: error processing burg-theme-minimal-sir (--remove):
         subprocess installed post-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Removing super-boot-manager ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for hicolor-icon-theme ...
        Errors were encountered while processing:
         burg-theme-minimal-sir
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm stuck now and Google has failed me. Has anyone encountered this problem before, or does anyone know a way to fix it?
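
    A hedged sketch of the standard dpkg recovery steps for a failing maintainer script, assuming the missing /boot/burg/locale directory (the thing burg-probe cannot stat) is what makes both scripts abort; the path comes from the error above, but the fix itself is not confirmed for this exact package:

        # Give burg-probe the directory it cannot stat, then retry
        sudo mkdir -p /boot/burg/locale
        sudo dpkg --configure -a

        # If the post-removal script still exits non-zero, make it a no-op
        # and remove the package (the same idea works for the .postinst script)
        sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/burg-theme-minimal-sir.postrm'
        sudo dpkg --remove burg-theme-minimal-sir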

  • How to use shared_ptr for COM interface pointers

    - by Seefer
    I've been reading various usage advice about the new C++ standard smart pointers unique_ptr, shared_ptr and weak_ptr, and I generally 'grok' what they are about when I'm writing my own code that declares and consumes them. However, all the discussions I've read seem restricted to the simple situation where the programmer is using smart pointers in his/her own code, with no real discussion of techniques for working with libraries that expect raw pointers, or with other kinds of 'smart pointers' such as COM interface pointers. Specifically, I'm learning my way through C++ by attempting to get a standard Win32 real-time game loop up and running that uses Direct2D & DirectWrite to render text to the display showing frames per second. My first task with Direct2D is creating a Direct2D factory object, with the following code from the Direct2D examples on MSDN:

        ID2D1Factory* pD2DFactory = nullptr;
        HRESULT hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory);

    pD2DFactory is obviously an 'out' parameter, and it's here that I become uncertain how to make use of smart pointers, if indeed it's possible. My inexperienced C++ mind tells me I have two problems: 1. With pD2DFactory being a COM interface pointer type, how would shared_ptr work with the AddRef() / Release() member functions of a COM object instance? 2. Are smart pointers able to be passed to functions that use an 'out' pointer parameter? I did experiment with the alternative of using _com_ptr_t from the comip.h header to help with pointer lifetime management, and declared the pD2DFactory pointer with the following code:

        _com_ptr_t<_com_IIID<ID2D1Factory, &__uuidof(ID2D1Factory)>> pD2DFactory = nullptr;

    It appears to work so far but, as you can see, the syntax is cumbersome :) So, I was wondering if any C++ gurus here could confirm whether smart pointers are able to help in cases like this and provide examples of usage, or point me to more in-depth discussions of smart pointer usage when working with libraries that know nothing of them. Or is it simply a case of my trying to use the wrong tool for the job? :)

  • Low hanging fruit where "a sufficiently smart compiler" is needed to get us back to Moore's Law?

    - by jamie
    Paul Graham argues that: "It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility." But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem you have experienced where a "smart-enough" compiler would have provided a real benefit, i.e. one that someone would pay for?

  • How to fix sound in Wolfenstein: Enemy Territory

    - by GrizzLy
    I installed Wolf:ET and I can't get sound to work. Everything I installed is in the default paths. I had 10.04 and then upgraded to 10.10 via the Software Update GUI; I had sound working in 10.04 with the method under 2. Here is what I have tried:

    1. killall esd; et; esd, with which I get:

        ------- sound initialization -------
        /dev/adsp: No such file or directory
        Could not open /dev/adsp
        ------------------------------------

    2. sudo -i
       echo "et.x86 0 0 direct" > /proc/asound/card0/pcm0p/oss
       echo "et.x86 0 0 disable" > /proc/asound/card0/pcm0c/oss
       exit

       With that I get "bash: /proc/asound/card0/pcm0p/oss: No such file or directory", and indeed I do not have that file; I have only sub0 and sub1 in pcm0p.

    3. Running et with the et-sdl-sound script, but with that I get this output in the console: http://pastebin.com/J7gRU1uh. I have probably messed up the SDL libraries; I could not get sound to work, so I downloaded new ones from the Debian package site and installed them. I also tried setting SDL_AUDIODRIVER="pulse" in et-sdl-sound; it looks like I am getting the same error as in method 3.

    4. pasuspender -- et +set s_alsa_pcm plughw:0 gives me:

        ------- sound initialization -------
        /dev/adsp: No such file or directory
        Could not open /dev/adsp
        ------------------------------------

    Misc: @Oli: I do not know whether I am running pulse or esd; how can I check that?
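
    A hedged sketch of two things worth trying, assuming the game is hard-wired to the legacy OSS device /dev/adsp named in the errors above (padsp and pactl come from pulseaudio-utils, aoss from alsa-oss; which wrapper applies depends on what is actually running, which the first two lines check):

        # Is it PulseAudio or esd that is running?
        ps -e | grep -E 'pulseaudio|esd'
        pactl info    # succeeds only if PulseAudio is up

        # If PulseAudio is running, route the game's OSS audio through it
        padsp et

        # Otherwise, the ALSA OSS wrapper is the equivalent fallback
        aoss et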

  • Gmail Doesn't Like My Cron

    - by Robery Stackhouse
    You might wonder what *nix administration has to do with maker culture. Plenty, if *nix is your automation platform of choice. I am using Ubuntu, Exim, and Ruby as the supporting cast of characters for my reminder service. Being able to send yourself an email with some data or a link at a pre-selected time is a pretty handy thing to be able to do indeed. Works great for jogging my less-than-great memory.

    http://www.linuxsa.org.au/tips/time.html
    http://forum.slicehost.com/comments.php?DiscussionID=402&page=1
    http://articles.slicehost.com/2007/10/24/creating-a-reverse-dns-record
    http://forum.slicehost.com/comments.php?DiscussionID=1900
    http://www.cyberciti.biz/faq/how-to-test-or-check-reverse-dns/

    After going on a huge round-the-world wild goose chase, I was finally told by someone on the exim-users list that my IP was in a range blocked by the Spamhaus PBL. Google said I needed an SPF record:

    http://articles.slicehost.com/2008/8/8/email-setting-a-sender-policy-framework-spf-record
    http://old.openspf.org/wizard.html
    http://www.kitterman.com/spf/validate.html

    The version of Exim I could get from the Ubuntu package manager didn't support DKIM, so I uninstalled Exim and installed Postfix.

    https://help.ubuntu.com/community/UbuntuTime
    http://www.sendmail.org/dkim/checker
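
    A hedged sketch of the DNS checks implied by those links, assuming dig is available (the domain and IP below are placeholders, not values from the post):

        DOMAIN=example.com   # hypothetical sending domain
        IP=203.0.113.7       # hypothetical server IP

        # Is the IP on a Spamhaus list? The query uses reversed octets;
        # 127.0.0.10 or .11 in the answer indicates the PBL.
        dig +short 7.113.0.203.zen.spamhaus.org

        # Does the domain publish an SPF record?
        dig +short TXT $DOMAIN | grep -i spf

        # Does reverse DNS resolve back to the mail host?
        dig +short -x $IP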

  • Restore audio settings - cannot open mixer: No such file or directory

    - by Alfred M.
    The internal speaker of my laptop never worked under Ubuntu. I tried to follow instructions on the web, and now the jack output does not work either. The graphical audio manager now displays a 'dummy output' instead of the three possible outputs I used to have (one of which worked for the jack output). In a terminal, alsamixer raises an error:

        cannot open mixer: No such file or directory

    I did try to remove and reinstall alsa-utils, but it did not change anything. This happened after a failed attempt to install alsa-driver-linuxant_1.0.23.1_all.deb from here. My sound card seems to no longer be recognised. After reboot, the sound icon is gone from the menu bar in the upper right corner. I think I have removed my sound card driver; indeed, the command sudo lshw -class multimedia showed the audio device as unclaimed. Any idea how I could revert to a better situation (that is, jack support and ALSA working)?

    EDIT: The command lspci -nnk | grep -iEA3 audio gives:

        00:1b.0 Audio device [0403]: Intel Corporation 82801I (ICH9 Family) HD Audio Controller [8086:293e] (rev 03)
        Subsystem: ASUSTeK Computer Inc. Device [1043:1893]
        00:1c.0 PCI bridge [0604]: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 [8086:2940] (rev 03)
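
    A hedged sketch of a way back, assuming the Linuxant package is what displaced the stock driver (the package name is inferred from the .deb filename in the post; verify with the first command before purging anything):

        # Confirm and remove the third-party driver package
        dpkg -l | grep -i linuxant
        sudo dpkg --purge alsa-driver-linuxant

        # Reinstall the stock ALSA userspace and reload the kernel modules
        sudo apt-get install --reinstall alsa-base alsa-utils
        sudo alsa force-reload

        # Verify the card is claimed again
        sudo lshw -class multimedia
        aplay -l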

  • Spotlight on mkyong

    - by MarkH
    Occasionally, I'd like to share a blog I've discovered or that someone has passed along to me. Criteria are few, but in a nutshell, it must be:

    1. Java-related. (Doh!)
    2. Interesting. A good blog is exciting to read at some level, whether due to perspective, eye-catching writing, or technical insight. It doesn't have to read like a Stephen King novel, but it should grab you somehow.
    3. Technically deep or technically broad. A site that dives deeply, quickly is a great reference for particular topics/tasks. On the other hand, one that covers a lot of ground at a high-but-still-technical level can be a handy site to visit occasionally as well. Both are what I consider "bookmarkable", but for different reasons.

    Drumroll, please... With that in mind, this Blog Spotlight is cast upon mkyong.com, a site I stumbled across that offers a little bit of everything for various Java dev audiences. The title indicates the site is for "Java web development tutorials", and indeed it does have these: JSF, Spring, Struts, Hibernate, JAX-WS, JAX-RS, and numerous other topics are addressed to varying degrees. The site isn't devoted exclusively to server-side tutorials, though. Recent posts include mobile development topics, and the links at the bottom of the page connect you to reference pages and other useful sites. I've poked around through a couple of the tutorials and, while they won't take you from "zero to hero", they do seem to provide a nice overview of the subject at hand. They also offer an occasional explanatory comment that is missing from far too many texts, sites, and doc pages. It's not a perfect site, but I like it.

    The Bottom Line: mkyong.com offers a nice "summary site" of server-side tutorials, mobile dev posts, and reference links. Check it out!

    All the best,
    Mark

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is: why do I blog? The answer is even simpler: I blog because I get extremely constructive comments and conversation from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out whether a table uses ROWGUID or not. I encourage all of you to read the conversation here: SQL SERVER – Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER without knowing the name of the column. David Hall wrote a few excellent follow-up comments, and every SQL enthusiast should read them: first, second and third. David always brings positive energy; he first shows the limitations of my solution here and here, and follows up with his own solution here. As he said, his solution is also not perfect, but it indeed leaves learning bites for all of us; worth reading if you are interested in unorthodox solutions.

    Kumar Harsh suggested that one can also find the identity column used in a table in a very similar way, using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE
        (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are alternate ways to find an identity column in the database as well. The following query will give a list of all identity column names with their corresponding table names:

        SELECT SCHEMA_NAME(so.schema_id) SchemaName,
               so.name TableName,
               sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc ON so.OBJECT_ID = sc.OBJECT_ID
            AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity; I would like to know what you do and how you do it when you have to deal with an identity column.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • How do I speed up this XML parsing operation?

    - by absentx
    I currently have a PHP script set up to do some XML parsing. Sometimes the script is used as an on-page include, and other times it is accessed via an AJAX call. The problem is that the load time for this particular page is very long. I started to think that the PHP I had written to find what I need in the XML was written poorly and that my script was very resource-intensive. After much research and testing, the problem is indeed not my scripting (well, perhaps you could consider it a problem with my scripting), but it looks like it simply takes a long time to load the particular XML sources. My code is like such:

        $source_recent = 'my xml feed';
        $source_additional = 'the other feed I need';

        $xmlstr_recent = file_get_contents($source_recent);
        $feed_recent = new SimpleXMLElement($xmlstr_recent);

        $xmlstr_additional = file_get_contents($source_additional);
        $feed_additional = new SimpleXMLElement($xmlstr_additional);

    In all my testing, the above code is what takes the time, not the additional processing I do below it. Is there any way around this, or am I at the mercy of the load time of the XML URLs? One crazy thought I had to get around it is to load the XML contents into a db every so often, then just query the db for what I need. Thoughts? Ideas?
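
    A hedged sketch of the caching idea from the post, swapped from "load it into a db" to the simpler variant of caching the feeds to local files on a schedule; the URLs and paths are placeholders, and the PHP would then file_get_contents() the local copy instead of the remote URL:

        #!/bin/sh
        # /usr/local/bin/cache_feeds.sh -- hypothetical location
        CACHE=/var/cache/feeds
        mkdir -p "$CACHE"

        # -f fails on HTTP errors, so a bad fetch never clobbers the cache
        curl -fsS 'http://example.com/recent.xml' -o "$CACHE/recent.xml.tmp" \
            && mv "$CACHE/recent.xml.tmp" "$CACHE/recent.xml"
        curl -fsS 'http://example.com/additional.xml' -o "$CACHE/additional.xml.tmp" \
            && mv "$CACHE/additional.xml.tmp" "$CACHE/additional.xml"

    And a crontab entry to refresh the cache every ten minutes:

        */10 * * * * /usr/local/bin/cache_feeds.sh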

  • Installing LBP 2900 on Ubuntu -> libs folders wrong?

    - by Peter Smit
    I am trying to get my Canon LBP2900 printer to work on Ubuntu 11.10 64-bit. What I have done is try to follow the steps on https://help.ubuntu.com/community/CanonCaptDrv190. So I downloaded the version 2.3 driver, converted the rpm files to debian packages and installed them:

        sudo alien cndrvcups-capt-2.30-1.x86_64.rpm cndrvcups-common-2.30-1.x86_64.rpm
        sudo dpkg -i cndrvcups-capt-2.30-1.x86_64.deb cndrvcups-common-2.30-1.x86_64.deb

    Then I restarted cups and tried to install the printer with lpadmin:

        sudo service cups restart
        sudo /usr/sbin/lpadmin -p LBP2900 -m /usr/share/cups/model/CNCUPSLBP2900CAPTK.ppd -v ccp://localhost:59787 -E

    What I noticed, however, is that the lpadmin step goes wrong with the error:

        lpadmin: Bad device-uri scheme "ccp"

    After trying to trace what has gone wrong, I think I have nailed it down to the fact that dpkg installed a file /usr/lib64/cups/backend/ccp instead of /usr/lib/cups/backend/ccp. Checking the original rpm with the archive manager shows that both /usr/lib and /usr/lib64 are indeed used, with the backend/ccp file only installed in lib64. As I understand it, Ubuntu 11.10 uses /usr/lib32 and /usr/lib instead, so the files are installed in the wrong place. Is there an automated method of converting the rpm/deb files with the wrong lib structure into ones with the right lib structure for Ubuntu 11.10? Or am I completely on the wrong track for getting my printer installed?
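
    A hedged sketch of a manual workaround, assuming the only real problem is the backend landing in /usr/lib64 (copying it to where Ubuntu's CUPS looks for backends; the paths come from the post, but the fix itself is an assumption rather than a documented step):

        # Put the CAPT backend where Ubuntu's CUPS expects it
        sudo cp /usr/lib64/cups/backend/ccp /usr/lib/cups/backend/ccp
        sudo chmod 755 /usr/lib/cups/backend/ccp

        # Retry the registration
        sudo service cups restart
        sudo /usr/sbin/lpadmin -p LBP2900 \
            -m /usr/share/cups/model/CNCUPSLBP2900CAPTK.ppd \
            -v ccp://localhost:59787 -E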

  • Microsoft (Hotmail, Live, MSN, Outlook): unable to send emails, and no support from Microsoft in the 3 months we have been asking

    - by HugeNut
    OK, this is something unbelievable. We have a website; users sign up and receive links to confirm they signed up. BUT:

    1 - Microsoft blocked our IP (no one with a Microsoft email account can receive our emails)
    2 - We tried contacting Microsoft, submitting the detailed form about our problem
    3 - We posted 3 times in their community about our problem
    4 - We tweeted them about our problem
    5 - We tried finding a telephone support number (the few there are aren't helping at all)

    Do you think we solved it? The answer is NO :/ We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months. Our emails are fine: we checked all the email headers following Microsoft's guidelines, but it seems that is not enough. Checking our IP reputation, everything seems OK; indeed, we can send email easily to any other provider (Gmail, Yahoo, etc.). Do you know any other way to try to get help?

    FULL ERROR RETURNED BY MICROSOFT:

        host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28)
        Unfortunately, messages from 94.23.***** weren't sent. Please contact
        your Internet service provider since part of their network is on our
        block list. You can also refer your provider to
        http://mail.live.com/mail/troubleshooting.aspx#errors.
        (in reply to MAIL FROM command)

    We are running NGINX + a PHP mailer on a Virtual Private Server (no hosting or shared hosting).
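
    A hedged sketch of the usual checks before a delisting request (the IP below is a placeholder, since the post redacts the real 94.23.x.x address):

        IP=203.0.113.7   # hypothetical; substitute the real address

        # Reverse the octets and ask Spamhaus whether the IP is listed;
        # 127.0.0.10 or .11 would indicate the PBL, i.e. a policy block
        dig +short 7.113.0.203.zen.spamhaus.org

        # Confirm reverse DNS resolves to a real hostname for the VPS
        dig +short -x $IP

    For the SC-001 block itself, Microsoft's own channels (the troubleshooting link quoted in the bounce and their Smart Network Data Services programme) are the delisting route, rather than Spamhaus.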

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have collision detection and resolution working just fine, but I am now trying to optimize the collision detection (which is currently O(N^2): a linear search). I have thought about multiple options: a bounding volume hierarchy, a binary space partitioning tree, an octree, or a grid. I do, however, need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache-coherence problems. Since everything is so dynamic, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated. To give some more background: you're flying a spaceship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets.

    EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast building of the data structures involved and detailed-enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf. If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.

  • Inheritance, commands and event sourcing

    - by Arthis
    In order not to redo things several times, I wanted to factor out common stuff. For instance, let's say we have a cow and a horse. The cow produces milk, the horse runs fast, but both eat grass.

        public class Herbivorous
        {
            public void EatGrass(int quantity)
            {
                var evt = Build.GrassEaten.WithQuantity(quantity);
                RaiseEvent(evt);
            }
        }

        public class Horse : Herbivorous
        {
            public void RunFast()
            {
                var evt = Build.FastRun;
                RaiseEvent(evt);
            }
        }

        public class Cow : Herbivorous
        {
            public void ProduceMilk()
            {
                var evt = Build.MilkProduced;
                RaiseEvent(evt);
            }
        }

    To eat grass, the command handler should be:

        public class EatGrassHandler : CommandHandler<EatGrass>
        {
            public override CommandValidation Execute(EatGrass cmd)
            {
                Contract.Requires<ArgumentNullException>(cmd != null);
                var herbivorous = EventRepository.GetById<Herbivorous>(cmd.Id);
                if (herbivorous.IsNull())
                    throw new AggregateRootInstanceNotFoundException();
                herbivorous.EatGrass(cmd.Quantity);
                EventRepository.Save(herbivorous, cmd.CommitId);
            }
        }

    So far so good. I get a Herbivorous object and have access to its EatGrass function; whether it is a horse or a cow doesn't really matter. The only problem is here:

        EventRepository.GetById<Herbivorous>(cmd.Id)

    Indeed, let's imagine we have a cow that has produced milk during the morning and now wants to eat grass. The EventRepository contains a MilkProduced event, and then comes the EatGrass command. In the command handler we are no longer in the presence of a cow, and a Herbivorous doesn't know anything about producing milk. What should it do? Ignore the event and continue, thus allowing the inheritance and "general" commands? Or throw an exception to forbid execution, which would mean only CowEatGrass and HorseEatGrass might exist as commands? Thanks for your help; I am just beginning with these kinds of problems, and I would be glad to hear from someone more experienced.

  • SAP or Navision? Career Path

    - by codebased
    This could be tricky to ask; I wasn't sure whether to ask this question here, but I thought I'd give it a try. I've been in the software industry since 2002, and I am now at a senior level where I normally code, lead, and define the architecture; giving technical solutions to management is one of the assets I've earned during my service. Now it is time to define a road map for the future, $$$. I am not in favor of project management roles. I've been thinking of going into ERP, and my current company gives me the option to go with Navision / Microsoft Dynamics. They are currently on 4.0, but they are planning to move to the 2009 version and also to build a plug-in of their own. The option is appealing because Microsoft is pushing hard to win the market for Dynamics products; however, they have had less success in Australia. The other option is SAP, where a person can earn $200K a year, whereas I doubt the same kind of financial growth is available to a Microsoft specialist. What is your opinion on Navision versus SAP? If I try to move wholly to SAP, it could be a bit challenging, as the market will consider me a fresher; however, the return is quite good. With Microsoft, I think the technology changes so fast that there is less chance to grow within the same experience; in other words, if a new framework arrives in .NET, the market looks for the person who knows that new framework, not .NET in general. With SAP, the base remains the same, and the chances of earning more from the market are better. What would you do if you were me? (On Stack Overflow, Navision questions number around 20+, whereas SAP has 200+... :-)

  • E.T. Phone "Home" - Hey, I've discovered a leak!

    - by Martin Deh
    Being a member of the WebCenter A-Team, we are often asked to performance-tune a WebCenter custom portal application or a WebCenter Spaces deployment. Most of the time, the process is pretty much the same. For example, we often use tools like HttpWatch and Firebug to monitor the application, and then perform load tests using JMeter or Selenium. In addition, there is the fine-tuning of the various performance-related parameters that are outlined in the documentation and in blogs written by my fellow A-Teamers (click on the "performance" tag in this ATEAM blog). In load tests whose outcome shows a significant reduction in system resources (memory), one of the contributing causes of memory "leakage" is the implementation of the navigation menu UI. OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu. In WebCenter Spaces, this comes through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is constructed using standard ADF components. These sample menu UIs basically enable the underlying navigation model to visualize itself to some extent. However, due to certain limitations of these sample menu implementations (deeper sub-levels of navigation items, look-and-feel, etc.), many customers have developed their own custom navigation menus using a combination of HTML, CSS and jQuery. While this is somewhat supported by the framework, it is important to know some best practices for ensuring that the navigation menu does not leak. In addition, in this blog I will point out a leak (BUG) that is in the sample templates.

    OK E.T., the suspense is killing me: what is this leak? (Note: for those who don't know, info on E.T. can be found here.) In both of the included templates, the example given for handling navigation back to the "Home" page will essentially produce a nice little memory leak every time the link is clicked. Let's take a look at a simple example, which uses the default template in Spaces. The outlined section below is the "link" that enables a user to navigate quickly back to the Group Space Home page. When you hover over the link, the browser displays the target URL, and from an initial look at the proposed URL, this is the intended destination. Note: "home" in this case is the navigation model reference (id) that enables the display of the "pretty URL". Next, notice the current URL displayed in the browser. The other highlighted item, adf.ctrl-state, is very important to the framework. This item is basically a persistent query parameter used by the (ADF) framework to manage the current session and page instance. Without this parameter present, among other things, browser back-button navigation will fail. In this example, the value of this parameter is currently 95k25i7dd_4. Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that the navigation was indeed successful, and the adf.ctrl-state is also in the URL. For those wondering why the URL displays Page3.jspx instead of Page2.jspx: the (file) naming convention for pages created at runtime in Spaces starts at Page1 and increments as you create additional pages, while the name of the actual link (i.e. Page2) is the page "title" attribute.
    So the moral of the story is that, unlike pages created at design time, for pages created at runtime the file name will 99% of the time not match the name that appears in the link. Next is to click the quick link for navigating back to the Home page. A quick investigation yields that the navigation was indeed successful: in the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter. So what's the issue? Can you remember what the value of adf.ctrl-state was? The current value is 95k25i7dd_149; however, the previous value was 95k25i7dd_4. Here is what happened. Remember that hovering over the link produced the following target URL:

        http://localhost:8888/webcenter/spaces/NavigationTest/home

    This is great for the browser, as this URL will navigate to the intended target. However, what is missing is the adf.ctrl-state parameter. Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object). The previous adf.ctrl-state is basically orphaned while continuing to be alive in memory. (Note: the auto-creation of the adf.ctrl-state does happen initially, when you invoke the Spaces application for the first time.) The following is the line of code that produced the issue:

        <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ...

    Here the boilerBean is responsible for returning the "string" URL, which in this case is /spaces/NavigationTest/home. Unfortunately, what is missing again is the adf.ctrl-state. (Note: there is more than one instance of these goLinks in the sample templates.)

    So E.T., how can I correct this? There are 2 simple fixes. For the goLink's destination, use the navigation model to return the actual "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state:

        <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}" ... />

    (Note: the node value is the [navigation model id].) Using a goLink does solve the main issue. However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash". In a Spaces application, this may be an annoyance to the users. Another way to solve the leakage problem, and also remove the flash between navigations, is to use an af:commandLink. For example, here is the code for this scenario:

        <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">
            <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/>
        </af:commandLink>

    Here, the navigation node for home is delivered by way of the attribute to the commandLink. The actual navigation is performed by processAction, which needs the "node" value.

    E.T., OK, you solved the OOTB sample BUG; what about my custom navigation code? I have seen many implementations of custom-coded navigation menus, and some blog sites give detailed examples. The majority of these implementations are very similar: the code usually involves standard HTML tags (DIVs, UL, LI, etc.) and either CSS or JavaScript (jQuery) to produce the flyout/drop-down effect. The navigation links in these cases are standard <a href...> tags. Although this type of approach is not fully accepted by the ADF community, it does work.
    The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL. For example:

        <a href="${contextRoot}${menu.goLinkPrettyUrl}">

    The main reason why this type of approach is popular is that links created this way (as with af:goLinks) make the pages crawlable by search engines. CommandLinks are currently not search-friendly. However, in the case of a Spaces instance this may be acceptable, so in this use case af:commandLinks would replace the <a> (or goLink) tags; the example code given for the af:commandLink above is still valid. One last important item: if you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI. In many cases I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting that controls the view. The recommendation here is to use a pure CSS approach to achieve the dropdown effects. One very important note: due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906); otherwise the leak is still present even when using the goLinkPrettyUrl method. Thanks E.T.! Now I can phone home and not worry about my application running out of resources due to my custom navigation!

  • Logarithmic spacing of FFT bins

    - by Mykel Stone
    I'm trying to work through the examples in the GameDev.net beat detection article (http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html). I have no issue performing an FFT and getting the frequency data, and I can follow most of the article. I'm running into trouble, though, in section 2.B, "Enhancements and beat decision factors". In this section the author gives three equations, numbered R10-R12, to be used to determine how many bins go into each subband:

    R10 - linear increase of the width of the subband with its index
    R11 - we can choose, for example, the width of the first subband
    R12 - the sum of all the widths must not exceed 1024

    He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind, and I cannot seem to see it.
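
    A hedged sketch of the algebra, assuming the linear law the article implies, w_i = a*i + b for subband widths w_1 ... w_N (the choice of N and of w_1 is yours; only then are a and b pinned down):

        % R10: w_i = a\,i + b                     (linear in the index)
        % R11: w_1 = a + b                        (you pick the first width)
        % R12: \sum_{i=1}^{N} w_i = a\,\frac{N(N+1)}{2} + bN = 1024

        % Substituting b = w_1 - a into R12 and solving:
        a = \frac{2\,(1024 - N\,w_1)}{N\,(N-1)}, \qquad b = w_1 - a

    For example, with N = 32 subbands and w_1 = 2 bins: a = 2(1024 - 64)/(32*31), which is about 1.935, and b is about 0.065. Rounding each w_i to an integer and absorbing the rounding error into the last subband keeps the total at exactly 1024.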

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. It was created by Microsoft, and in my opinion it is indeed a fantastic read. It took me some time to get through the entire book, but it was worth it, as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book:

    "Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:
    • Which NoSQL database(s) best meet(s) the needs of the organization?
    • How does an organization integrate a NoSQL database into its solutions?"

    As I kept on reading the book, I found it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, who implemented a solution that comprised an assortment of different databases.

    Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog, as I am going to talk about the same subject and it will be very interesting.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented. For example, there is an isomorphism expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z)   // Felix
        match v with | ((x,y),z) -> x,(y,z)      (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper:

        let c x = C x

    Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason, Felix performs type-class dispatch at compile time, whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array, the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

  • Is it possible to render and style a <title> element from within the <head> of an html document?

    - by Brian Z
    Is it possible to render and style a <title> element from within the <head> of an HTML document? I thought it was impossible to render information from the <head>, but the system status page for 37signals.com seems to be doing just that: http://status.37signals.com/. If you inspect the element at the very top of the page, the text that reads "37signals System Status", you'll see that the part of the DOM generating the text is the <head>'s <title>, and the CSS is as follows:

        title {
            display: block;
            margin: 10px auto;
            max-width: 840px;
            width: 100%;
            padding: 0 20px;
            float: left;
            color: black;
            text-rendering: optimizelegibility;
            -moz-box-sizing: border-box;
            box-sizing: border-box;
        }

    Can someone confirm that the <title> info from the <head> is indeed what is being rendered? If so, can someone point to documentation that defines this capability, as I have not found any? I have applied the above CSS to an HTML document on my local web server, using the same browser (Chromium, OS X 10.8.5) as the 37signals site was viewed on, yet my file did not display the <head>'s <title>.

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have:

        <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script>

    When I load the page in my browser (Google Chrome v27.0.1453.116) and enable the developer tools, it says:

        Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js'
        because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.

    Indeed, the script won't run. Why does Chrome think this is a plaintext file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript", and still got the same error. Then I tried changing it to type="text/plain", just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally, I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape character %2E:

        <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script>

    This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?
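
    A hedged diagnostic, assuming curl is available: the error message itself says the server serves the file as text/plain, and the header check below simply makes that visible; Chrome is trusting the Content-Type header, not the .js extension:

        curl -sI https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js \
            | grep -i '^content-type'
        # Per the error message above, this should show: text/plain

    If the header really is text/plain, no type attribute or filename trick on the client side will change Chrome's decision; serving the file from a host that sends a JavaScript MIME type (such as a CDN copy, or your own server) is the kind of workaround that addresses the actual cause.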

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website, which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website, and in general they appear so far to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) of a Google search for my most important keyphrase. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had appeared to be very old, from about three weeks earlier. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt, and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively, this looks like some kind of junk spam trick, and I had indeed been getting some spam from "attracta". So my questions are:

    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between attracta and hostmonster?
    3. Does the attracta robots file explain the lack of re-caching?
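
    A hedged sketch for checking and replacing the file, assuming shell or cPanel file access to the document root (the ~/public_html path is a typical cPanel layout, an assumption rather than something the post confirms):

        # See what is actually being served right now
        curl -s http://www.example.com/robots.txt   # substitute your domain

        # Replace it with a minimal, permissive robots.txt
        printf 'User-agent: *\nDisallow:\n' > ~/public_html/robots.txt

    An empty Disallow means "block nothing". Removing the attracta sitemap line stops search engines being pointed at the third-party sitemap, though it would not by itself explain or fix the stale Google cache.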

  • Missing X-Spam-Status header

    - by Walt Stoneburner
    I recently upgraded to Ubuntu 14.04.1 LTS (trusty), followed the directions in https://help.ubuntu.com/14.04/serverguide/mail-filtering.html, and am sending and receiving mail just fine. While I do see X-Virus-Scanned headers in my messages, which suggests mail is indeed being processed, I do not see any X-Spam-Level or X-Spam-Score headers being added to messages. This makes downstream procmailrc and client-side filtering more difficult. While having $final_spam_destiny = D_DISCARD in /etc/amavis/conf.d/20-debian_defaults does greatly reduce spam to my inbox, I had concerns about false positives prior to tuning and didn't know where they were going, so I have set it to D_PASS for the time being. This exposed the problem. I'm not sure where to start diagnosing it (otherwise I'd post a suspect configuration file). /etc/amavis/conf.d/15-content_filter_mode has the lines uncommented to enable virus and spam checks, and virus checking appears to be working according to the headers. SpamAssassin certainly seems to be starting just fine, too:

        SpamAssassin debug facilities: info
        SA info: zoom: able to use 360/360 'body_0' compiled rules (100%)
        SpamAssassin loaded plugins: AskDNS, AutoLearnThreshold, Bayes, BodyEval, Check, DKIM, DNSEval, FreeMail, HTMLEval, HTTPSMismatch, Hashcash, HeaderEval, ImageInfo, MIMEEval, MIMEHeader, Pyzor, Razor2, RelayEval, ReplaceTags, Rule2XSBody, SPF, SpamCop, URIDNSBL, URIDetail, URIEval, VBounce, WLBLEval, WhiteListSubject
        SpamControl: init_pre_fork on SpamAssassin done

    I've also set $log_level = 2; in /etc/amavis/conf.d/50-user and don't see any obvious errors rolling by in the logs.

    Q: Any recommendations on what to try next?

    UPDATE (it appears that I have the right setting already):

        /etc/amavis/conf.d$ grep sa_tag_level_deflt *
        20-debian_defaults:# $sa_tag_level_deflt = 2.0; # add spam info headers if at, or above that level
        20-debian_defaults:$sa_tag_level_deflt = -999; # add spam info headers if at, or above that level
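
    A hedged way to split the two suspects, SpamAssassin scoring versus amavis header insertion, using the standard GTUBE test pattern. One assumption worth flagging: amavisd-new normally adds X-Spam-* headers only for recipients in its @local_domains_acl, so a domain missing there would by itself explain the missing headers (a guess about the cause, not something the post confirms).

        # Build a minimal message whose body is the GTUBE pattern
        printf 'Subject: gtube test\n\nXJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X\n' > /tmp/gtube.eml

        # Run it straight through SpamAssassin; -t (test mode) adds the headers
        spamassassin -t < /tmp/gtube.eml | grep -i '^X-Spam'

    If X-Spam-Status shows up here but never appears on delivered mail, the scoring side is fine and the amavis/local-domains side deserves the scrutiny.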

  • Wrong package for Idle-python2.7?

    - by adelval
    I'm running Python 2.7 on Ubuntu 13.10, and IDLE (idle-python2.7) has stopped working. Whenever I try to open a file in an editor window, the window is blank, even though the file exists and is not empty. Furthermore, it is not possible to close IDLE after this, except via a terminal kill command. IDLE was working fine before. The problem appeared after I installed a number of things, including idlex, various scipy modules, and mpmath, but after trying to repair it in several ways, it seems to be caused by Ubuntu's official idle package. I get this error in the terminal when trying to open a file in IDLE:

        Exception in Tkinter callback
        [...lines omitted for brevity...]
        File "/usr/lib/python2.7/idlelib/IOBinding.py", line 129, in coding_spec
            for line in lst:
        NameError: global name 'lst' is not defined

    If you look at the code, it looks like an obvious bug: lst is indeed not defined in the function coding_spec. Furthermore, the source file IOBinding.py at http://fossies.org/dox/Python-2.7.5/IOBinding_8py_source.html is different and doesn't show the problem. Thinking that one of the recently installed packages had somehow overwritten the file, I tried a number of things, including reinstalling all the Python packages from Synaptic, but the wrong IOBinding.py is still there. The reason I think the problem lies with the package itself is that I finally did sudo apt-get remove idle, verified that the idlelib directory was empty, and reinstalled with sudo apt-get install idle, but the wrong IOBinding.py file came back again. I can in fact make IDLE work again by simply replacing lst with str in the code, but to me that's clearly a no-no. I'm not too happy either about trying to replace just that file from the source Python distribution, as other files may be wrong as well. I want to get the right files from the official package.
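
    A hedged way to prove whether the installed file matches what the archive actually ships, assuming apt-get download is available on this release (this only compares the files; it won't explain how a divergent copy got installed):

        # Confirm which package actually owns the file
        dpkg -S /usr/lib/python2.7/idlelib/IOBinding.py

        # Fetch that package without installing it, and unpack it to a scratch dir
        apt-get download idle-python2.7
        dpkg-deb -x idle-python2.7_*.deb /tmp/idle-check

        # Compare the shipped IOBinding.py against the installed copy
        diff /tmp/idle-check/usr/lib/python2.7/idlelib/IOBinding.py \
             /usr/lib/python2.7/idlelib/IOBinding.py

        # Cross-check installed files against dpkg's md5 manifest
        cd / && md5sum -c /var/lib/dpkg/info/idle-python2.7.md5sums

    If diff reports no difference, the "wrong" file is what the package genuinely ships, and the bug report belongs against the Ubuntu package rather than the local install.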

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi All. Here are the details for the webcast on Oracle Flashback Technologies on Wednesday (9th June 2010), beginning at 1.30pm (Sydney, Australia time).

    The Oracle Database architecture leverages unique technological advances in the area of database recovery from human errors. Oracle Flashback Technology provides a set of new features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past! Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database. Oracle Database 10g extended Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating on just the changed data: the time it takes to recover from an error is now equal to the amount of time it took to make the mistake. Oracle 10g Flashback Technologies include Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback Technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database. Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation, with the new Flashback Transaction capability.

    Webcast: http://strtc.oracle.com (IE6, 7 & 8 supported only)
    Conference ID for the webcast: 6690835
    Conference Key: flashback
    Enrollment is required. Please click here to enroll, and please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call).

    Audio details:
    NZ Toll Free - 0800 888 157 or AU Toll Free - 1800 420 354 (or +61 2 8064 0613)
    Meeting ID: 7914841
    Meeting Passcode: 09062010

    Talk to you all on Wednesday 9th June.
    Alex
