Search Results

Search found 2880 results on 116 pages for 'deep linking'.

Page 55/116 | < Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >

  • What would cause the "gi" module to be missing from python?

    - by Catalin Dumitru
    After some not so clever editing of the default Python version in Ubuntu, from 2.7 to 3.2, I ended up breaking my entire system. After my computer imploded and everything stopped working, I tried to revert my changes (by linking /usr/bin/python2.7 to /usr/bin/python and changing the default version in /usr/share/python/debian_defaults back to 2.7), but some things are still broken. For example, when I type "import gi" in the python interpreter I get the following message: >>> import gi Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named gi >>> The same error appears in some programs too (e.g. gnome tweak tool). I have tried re-installing python both from the software center and from sources, but the same error persists. python --version now returns Python 2.7.2, and some software packages which depend on python 2.7 are working again (for example the software center), but some things are still broken. Is there anything I can do to completely re-install python 2.7 as the default version?
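
    A minimal recovery sketch, assuming a stock Ubuntu install where apt/dpkg still work; the package names (python2.7, python-minimal, python-gobject) are the usual ones for that era but may differ by release:

      # point /usr/bin/python back at the 2.7 interpreter
      sudo rm /usr/bin/python
      sudo ln -s /usr/bin/python2.7 /usr/bin/python

      # reinstall the interpreter and the GObject introspection bindings that provide "gi"
      sudo apt-get install --reinstall python2.7 python-minimal
      sudo apt-get install --reinstall python-gobject

      python -c "import gi; print gi"   # should now import cleanly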

    Read the article

  • How to find location of installed library

    - by Raven
    Background: I'm trying to build my program, but first I need to set up libraries in NetBeans. My project is using GLU and therefore I installed libglu-dev. I didn't note where the libraries were installed and now I can't find them. I switched to Linux just a few days ago and so far I'm very content with it; however, I couldn't google this one out and I'm becoming frustrated. Is there a way to find out where the files of a package were installed without running the installation again? I mean, if I got library xxx and installed it some time ago, is there some command that will print this info? I've already tried the locate, find and whereis commands, but either I'm missing something or I just can't do it correctly. For libglu, locate returns: /usr/share/bug/libglu1-mesa /usr/share/bug/libglu1-mesa/control /usr/share/bug/libglu1-mesa/script /usr/share/doc/libglu1-mesa /usr/share/doc/libglu1-mesa/changelog.Debian.gz /usr/share/doc/libglu1-mesa/copyright /usr/share/lintian/overrides/libglu1-mesa /var/lib/dpkg/info/libglu1-mesa:i386.list /var/lib/dpkg/info/libglu1-mesa:i386.md5sums /var/lib/dpkg/info/libglu1-mesa:i386.postinst /var/lib/dpkg/info/libglu1-mesa:i386.postrm /var/lib/dpkg/info/libglu1-mesa:i386.shlibs The other two commands fail to find anything. Now locate did its job, but I'm sure none of those paths is where the library actually resides (at least everything I was linking so far was in /usr/lib or /usr/local/lib). libglu was introduced just as an example; I'm looking for a general solution to this problem.
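
    A minimal sketch of the dpkg-based approach, assuming Debian/Ubuntu packaging; the package and file names below are illustrative:

      dpkg -l 'libglu*'              # which libglu packages are actually installed
      dpkg -L libglu1-mesa-dev       # list every file a given installed package put on disk
      dpkg -S libGLU.so              # reverse lookup: which installed package owns this file
      apt-file search libGLU.so      # same lookup across packages that are not installed (needs the apt-file package)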

    Read the article

  • EPPM Webcast Series Part II: Build – Consistently delivering successful projects to ensure financial success

    - by Sylvie MacKenzie, PMP
    Oracle Primavera invites you to the second in a series of three webcasts linking Enterprise Project Portfolio Management with enhanced operational performance and better financial results. Join us for the next installment of our 'Plan, Build, Operate' webcast series, as we look to address the challenges organizations face during the execution phase of their projects. Webcast II: Build – Consistently delivering successful projects to ensure financial success. This webcast will look at three key questions: How do you maintain consistency in delivery whilst maintaining visibility and control? How do you deal with project risk and mitigation strategies? How do you ensure accurate reporting? Hear from Geoff Roberts, Industry Strategist from Oracle Primavera. Geoff will look at how solutions can help to address the challenges around: Visibility and governance Communication and complex coordination Collecting and reporting progress Measuring and reporting It is imperative that organizations understand the impact projects can have on their business. Attend this webcast and understand how consistently delivering successful projects is vital to the financial success of an asset-intensive organization. Register today! Please forward this invite to your colleagues who you think may benefit from attending.

    Read the article

  • #altnetseattle – MEF, What is it?

    - by GeekAgilistMercenary
    I dived into the MEF session with Glenn Block, Sourabh Mathur, Brian Henderson, and others.  Glenn covered the basic architectural ideas of MEF and then dived into a few examples. MEF is a framework for decoupling components, built around the idea of discoverable type systems. Traditional extensibility mechanisms have a host and the respective extensions, commonly linking these two aspects with a form of registration. MEF removes the need for the registration part of the architecture and uses a contract instead. At some point with MEF you get down to parts, which removes even the complexity of a host or extensions, leaving a truly evolvable architecture based on the natural growth of parts. It is also referred to as the framework that removes the "new" keyword. The idea is that parts pull together other parts that they need.  Between each part is a contract. Each part has imports or exports for the parts it needs or the things it offers. If you check out the MEF CodePlex site you will find a host of additional information.  The framework download also has some decent examples that help one get kick-started.
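
    A minimal C# sketch of the import/export idea, assuming the MEF bits that shipped with .NET 4 (System.ComponentModel.Composition); the IMessageSender/EmailSender/Notifier names are illustrative, not from the session:

      using System;
      using System.ComponentModel.Composition;
      using System.ComponentModel.Composition.Hosting;
      using System.Reflection;

      public interface IMessageSender { void Send(string message); }

      [Export(typeof(IMessageSender))]          // this part exports an IMessageSender
      public class EmailSender : IMessageSender
      {
          public void Send(string message) { Console.WriteLine("email: " + message); }
      }

      public class Notifier
      {
          [Import]                              // this part imports whatever satisfies the contract
          public IMessageSender Sender { get; set; }
      }

      class Program
      {
          static void Main()
          {
              // discover parts in this assembly and satisfy Notifier's imports -- no registration code
              var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
              using (var container = new CompositionContainer(catalog))
              {
                  var notifier = new Notifier();
                  container.ComposeParts(notifier);
                  notifier.Sender.Send("hello from MEF");
              }
          }
      }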

    Read the article

  • How should I manage persistent score in Game Center leaderboards?

    - by Omega
    Let's say that I'm developing an iOS RPG where the player gains 1 point per monster kill. The amount of monsters killed is persistent data: it is an endless adventure, and the score keeps on growing. It isn't a "session score" like Fruit Ninja, but rather a "reputation score". There are Game Center leaderboards for that score. Keep killing monsters, your score goes up, and the leaderboards are updated. My problem is that, technically, you can log out and log in using a different Game Center account, kill one monster, and the leaderboards will be updated for the new GC account. Supposing that this score is a big deal, this could be considered cheating, because if you have a score of 2000, any of your friends who have never played the game can simply log into your iPhone, play the game, and the system will update the score for their accounts, essentially giving them 2000 points in the leaderboards for doing nothing. I have considered linking one GC account to a specific save game, so it won't update your score unless you're using the linked GC account. But what if the player actually needs to change their GC account? Technically they would be forced to start a new game and link their account to that profile. How should I prevent this kind of cheat? Essentially, I don't want someone to distribute a high score to multiple GC accounts, given the fact that the game updates the score constantly since it isn't a "session score". I do realize that it isn't quite a big deal, but I'm curious about how to avoid this.
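
    A rough sketch of the "link the save to one account" idea the question already raises, written against today's GameKit API (GKLocalPlayer/GKScore) rather than the one current when the question was asked; SaveGame and the "reputation" leaderboard identifier are illustrative:

      import GameKit

      final class SaveGame {                      // illustrative persistent save data
          var linkedPlayerID: String?
          var monsterKills: Int = 0
      }

      func reportReputation(for save: SaveGame) {
          let player = GKLocalPlayer.local
          guard player.isAuthenticated else { return }

          // First run: bind the save to whichever account is signed in.
          if save.linkedPlayerID == nil { save.linkedPlayerID = player.playerID }

          // A different account is signed in: keep playing, but never report its score.
          guard save.linkedPlayerID == player.playerID else { return }

          let score = GKScore(leaderboardIdentifier: "reputation")
          score.value = Int64(save.monsterKills)
          GKScore.report([score]) { error in
              if let error = error { print("score report failed: \(error)") }
          }
      }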

    Read the article

  • Solaris 11 : les nouveautés vues par les équipes de développement

    - by Eric Bezille
    For those who are not on the distribution list of the French-speaking Solaris user community, here is a small compilation of links to the Solaris 11 development teams' blogs, covering the new features in multiple areas in detail. New features on the desktop side: What's new on the Solaris 11 Desktop ? S11 X11: ye olde window system in today's new operating system Accessible Oracle Solaris 11 - released ! Development tools: Nagging As a Strategy for Better Linking: -z guidance Much Ado About Nothing: Stub Objects Using Stub Objects The Stub Proto: Not Just For Stub Objects Anymore elffile: ELF Specific File Identification Utility The new packaging system, Image Packaging System (IPS): Replacing the Application Packaging Developer's guide IPS Self-assembly - Part 1: overlays Self Assembly - Part 2: Multiple Packages Delivering configuration Strengthened security in Solaris: Completely disabling root logins in Solaris 11 Password (PAM) caching for Solaris su - "a la sudo" User home directory encryption with ZFS My 11 favorite Solaris 11 features (on security) - by Darren Moffat Exciting crypto advances with the T4 processor and Oracle Solaris 11 SPARC T4 OpenSSL Engine Solaris AESNI OpenSSL Engine for Intel Westmere Incident management and self-healing: Service Management Facility (SMF) & Fault Management Architecture (FMA)  Introducing SMF Layers Oracle Solaris 11 - New Fault Management Features Virtualization: Oracle Solaris Zones These are 11 of my favorite things! (on zones) - by Mike Gerdts Immutable Zones on Encrypted ZFS The IPS System Repository (with zones) - by Tim Foster A few bonuses from the Solaris community: Solaris 11 DTrace syscall Provider Changes Solaris 11 - hostmodel (Control send/receive behavior for IP packets on a multi-homed system) A Quick Tour of Oracle Solaris 11 To finish, I also encourage you to consult this very useful reference document: Transition from Oracle Solaris 10 to Oracle Solaris 11 Happy reading!

    Read the article

  • Drive project success & financial performance with business critical Enterprise Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    Oracle Primavera invites you to the first in a series of three webcasts linking Enterprise Project Portfolio Management with enhanced operational performance and better financial results. Few organizations fully understand the impact projects have on their business. Consistently delivering successful projects is vital to the financial success of an asset-intensive organization. Enterprise Project Portfolio Management (EPPM) is not a new concept, yet for many organizations it is not considered "business critical". Webcast 1: Plan – Aligning project selection and prioritization with corporate objectives This webcast will look at 2 key questions: Are you aligning portfolio decisions with strategic objectives? How do you effectively measure the success of your portfolio decisions? Hear from Accenture, who'll present a compelling case for why asset-intensive organizations should consider EPPM as business critical. They'll explore: How technology is being used to enhance project delivery How collaboration enhances delivery performance The major challenges associated with the planning phase of a project Next hear from Geoff Roberts, Industry Strategist from Oracle Primavera. With over 30 years' experience in project management/project controls in the construction, utilities and oil & gas sectors, Geoff will investigate how EPPM is a best practice and can support an organization through project selection and prioritization, ensuring that decisions are aligned with corporate objectives. Don’t miss out, register today!

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles.  But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting. I have to say I like a lot of what I’ve been seeing.  Azure’s not perfect, and BPOS certainly isn’t either.  But there’s good stuff in all these products, and there’s a lot of value. Azure Goes Deep Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData Interface.  So does SQL Azure and so does DataMarket (the former project “Dallas”).  That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more. Snapping in Place Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now.  The products belie this and so do my observations of the product teams’ motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.   BPOS: Bad Acronym, Easy Setup BPOS is about products you already know; Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it’s hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  
I would say that going from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.   What I want from my Azure Christmas Next Year Not everything about Microsoft’s cloud is good.  I close this post with a list of things I’d like to see addressed: BPOS offerings are still based on the 2007 wave of Microsoft server technologies.  We need to get to 2010, and fast.  Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones.  Office 365 can’t come fast enough. Azure’s Internet tooling and domain naming is scattered and confusing.  Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net.  The Azure portal and Project Houston are at azure.com.  Then there’s appfabriclabs.com and sqlazurelabs.com.  There is a new Silverlight portal that replaces most, but not all, of the HTML ones.  And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling. Microsoft is the king of tooling.  They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work.  I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup.  Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast. Beta programs, if they’re open, should onboard me quickly.  I know the process is difficult and everyone’s going as fast as they can.  But I don’t know why it’s so difficult or why it takes so long.  Getting developers up to speed on new features quickly helps popularize the platform.  Make this a priority. Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch.  Support .NET 4 now.  Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier.  Have LightSwitch work with SQL Azure and not require SQL Express.  LightSwitch has some promising Azure integration now.  But we need more.  WebMatrix has none and that’s just silly, now that the Extra Small Instance is being introduced. The Windows Azure Platform Training Kit is great.  But I want Microsoft to make it even better and I want them to evangelize it much more aggressively.  There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is.   The Training Kit ties a lot of disparate stuff together nicely.  Make it known. Should Old Acquaintance Be Forgot All in all, diving deep into Azure was a good way to end the year.  Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • Why is sudo bash different from regular bash

    - by cyberjar09
    Problem description: I am using something called Play Framework in my development, which requires me to make the python script play available in the path. Hence I create a symbolic link in /usr/local/bin ... Now I have written a shell script (call it status.sh) which calls this python script as follows: play status <some values here related to my app> &> /tmp/xyz.txt and this shell script then sends me the file via email. This works perfectly when I execute the script as ./status.sh. However, when the script is executed as a cron job every day, I get output on stderr saying 'play: command not found'. Hence I did some digging on my own and here are my findings: echo $PATH when I am on the shell shows that I have /usr/local/bin available to me, hence I can successfully execute the command play status; however, when I type in sudo bash and then echo $PATH, I do not have the path /usr/local/bin anymore. It is a limited set of folders (one of them being /usr/bin). Q: Why this behavior?! I fail to understand why the path is different. Also, as a workaround, would you suggest I: create a new symbolic link from /usr/bin to /usr/local/bin (what are the side effects of this?) remove the /usr/local/bin symlink altogether and only use /usr/bin or is there a convention that I am not following here for linking new programs and executing them from $PATH? Thanks.
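
    Two separate mechanisms are at work here: sudo resets PATH to the secure_path value in /etc/sudoers, and cron runs jobs with its own minimal PATH (typically /usr/bin:/bin), so neither sees /usr/local/bin. A minimal sketch of the usual workarounds; the schedule and paths below are illustrative:

      # Option 1: give cron a fuller PATH at the top of the crontab (crontab -e)
      PATH=/usr/local/bin:/usr/bin:/bin
      0 8 * * * /home/user/status.sh

      # Option 2: don't rely on PATH at all -- call the launcher by absolute path inside status.sh
      /usr/local/bin/play status <some values here related to my app> &> /tmp/xyz.txt

      # For the sudo case: the allowed PATH is the secure_path line in /etc/sudoers (edit with visudo)
      # Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"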

    Read the article

  • Google Webmaster tools Incorrect rel-alternate-hreflang implementation warning message

    - by Noam
    I'm getting this warning message in Google Webmaster Tools: Incorrect rel-alternate-hreflang implementation In particular, there seems to be a problem with missing or incorrect bi-directional linking (when page A links with hreflang to page B, there must be a link back from B to A as well). This message seems pretty straightforward, but when checking their example pages, I'm not finding anything wrong. I'm using alternate for translation of the main site menu, titles, etc. In each page I have this: <link rel="alternate" hreflang="en" href="http://mydomain.com/page" /> <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" /> <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" /> <link rel="alternate" hreflang="th" href="http://th.mydomain.com/page" /> <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" /> <link rel="alternate" hreflang="pt" href="http://pt.mydomain.com/page" /> I've double-checked that this exists in all 6 pages. This is the first time I've seen this message, although I implemented this at least 6 months ago and the implementation hasn't changed. Is there any way to check a specific set of pages for these things? Am I missing something in my implementation? We're auto-redirecting people from a location to their specific language, and give them an option to manually change this. I've also just found out about the suggestion for the Vary HTTP header - is that relevant and important here?
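
    One detail worth double-checking, not mentioned in the post itself: "jp" is a country code rather than an ISO 639-1 language code, so the Japanese alternate should be declared as "ja" (or a language-region pair such as "ja-JP"), and every listed page needs the same full set of tags, including a self-referencing one:

      <!-- corrected Japanese alternate; the other five tags stay as they are -->
      <link rel="alternate" hreflang="ja" href="http://ja.mydomain.com/page" />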

    Read the article

  • can't run sqldeveloper on Ubuntu

    - by nazar_art
    I tried to install SQL Developer in the following way: Download SQL Developer from the Oracle website (I chose the Other Platforms download). Extract the file to /opt: sudo unzip sqldeveloper-*-no-jre.zip -d /opt/ sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh Link an in-path launcher for Oracle SQL Developer: sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper Edit /usr/local/bin/sqldeveloper.sh and replace its content with: #!/bin/bash cd /opt/sqldeveloper/sqldeveloper/bin ./sqldeveloper "$@" Run SQL Developer: sqldeveloper But it shows the following output: nazar@lelyak-desktop:/opt/sqldeveloper? ./sqldeveloper.sh Oracle SQL Developer Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved. LOAD TIME : 401# # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007f3b2dcacbe0, pid=20351, tid=139892273444608 # # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17) # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops) # Problematic frame: # C 0x00007f3b2dcacbe0 # # Core dump written. Default location: /opt/sqldeveloper/sqldeveloper/bin/core or core.20351 # # An error report file with more information is saved as: # /tmp/hs_err_pid20351.log # # If you would like to submit a bug report, please visit: # http://bugreport.sun.com/bugreport/crash.jsp # /opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 20351 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}" 134 nazar@lelyak-desktop:/opt/sqldeveloper? java -version java version "1.7.0_65" Java(TM) SE Runtime Environment (build 1.7.0_65-b17) Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) Here is the content of /tmp/hs_err_pid20351.log How can I solve this?

    Read the article

  • Can inbound links through template-based layouts result in penalties?

    - by Liam Sorsby
    So obviously link building is encouraged as long as it is natural, organic and has meaningful links with content relevant to your site. Obviously, with the constant release of new updates to algorithms, Google is flagging sites for unnatural links. My question is: can this be caused by templating systems? With WordPress, for example, where you can add a link in the footer and it is repeated throughout the entire website, generating thousands of links? If we don't add any links, good content will be re-posted and linked to; but surely if your content is constantly linked to, this will flag your site for "unnatural" links, as it's difficult to see whether someone has been paid to write an article on your content. Or does Google just simply want us to audit some of the links to show we are making an effort? As you can tell, we have had a manual action for: Unnatural links to your site—impacts links. However, this seems to impact our website as well. Edit: To clarify the question: can you get penalised for paying for advertising on a site that uses a templated sidebar? So when they create a new blog/page etc., your link is also added onto the page, resulting in thousands of links to one page on our site. I know that one effect may be that a zero-PageRank web page linking to your page dilutes the PR of our page. However, the links are only inbound, not reciprocal.

    Read the article

  • links for 2011-03-01

    - by Bob Rhubart
    Oracle Technology Network Architect Day: Denver - March 23 This live one-day event will bring together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today’s architects regularly face. (tags: oracle otn architect entarch) Java.net Reborn (Oracle Technology Network Blog (aka TechBlog)) "The migration was a huge effort. Over 1400 projects were migrated (and some 30 projects are left to go). A large part of the migration was a big cleanup of abandoned projects...The new java.net site is smaller, faster and now the percentage of good, current content is much higher." (tags: oracle otn java java.net) This Week: OTN Java Developer Day in Boston, Massachusetts (US) | Java.net "This Thursday, March 3, the Oracle Technology Network will be hosting an OTN Developer Day titled You are the future of Java in Boston, Massachusetts (US). The all-day event includes a keynote address ("Java, the Language of the Future") and four separate tracks..." (tags: oracle otn java event) A brief introduction to BRM and architecture (Red Adventure) Yani Miguel offers a primer on the architecture behind Billing and Revenue Management. (tags: oracle otn brm) SOA Suite Integration: Part 1: Building a Web Service (The Shorten Spot) Anthony Shorten's first post in a new series "will not feature SOA Suite at all, but will concentrate on the capability for the Oracle Utilities Application Framework to create Web Services you can use for integration." (tags: oracle otn SOA soasuite) Darwin-IT: VirtualBox on Windows XP Martien van den Akker shares a few tips. (tags: oracle otn virtualization virtualbox) Pas Apicella: Developing RESTful Web Services from JDeveloper 11g (11.1.1.4) Pas says: "In this example we use JDeveloper to create a basic JAX-RS Web Service from support provided within the code editor as no wizard support exists within JDeveloper 11g at this stage." (tags: oracle otn REST SOA) Alexander Buckley: Maintenance Review of the Java VM Specification The Java Virtual Machine Specification is the authoritative reference for the design of the Java virtual machine that underpins the Java SE platform. In an implementation-independent manner, the Specification describes the architecture, linking model, and instruction set of the Java virtual machine. (tags: oracle otn java virtualization jvm javase)

    Read the article

  • Why do I get this Debug Assertion Failed? Expression: list iterator not dereferenceable [migrated]

    - by Karel
    I'm trying this example from the (translated to Dutch) book by Bjarne Stroustrup (C++): #include <vector> #include <list> #include "complex.h" complex ac[200]; std::vector<complex> vc; std::list<complex> l; template<class In, class Out> void Copy(In from, In too_far, Out to) { while(from != too_far) { *to = *from; ++to; ++from; } } void g(std::vector<complex>& vc , std::list<complex>& lc) { Copy(&ac[0], &ac[200], lc.begin()); // generates debug error Copy(lc.begin(), lc.end(), vc.begin()); // also generates debug error } void f() { ac[0] = complex(10,20); g(vc, l); } int main () { f(); } Compiling and linking succeed (0 errors/warnings), but at runtime I get this error: Debug Assertion Failed! Program: path to exe file: \program files\ms vs studio 10.0\vc\include\list Line: 207 Expression: list iterator not dereferenceable For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts. (Press retry to debug the application)
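
    A sketch of why the assertion fires and one way around it: the list and vector start out empty, so writing through lc.begin() and vc.begin() dereferences iterators that point at nothing. An inserting output iterator grows the containers instead (g_fixed is an illustrative name, not from the book):

      #include <iterator>   // std::back_inserter

      void g_fixed(std::vector<complex>& vc, std::list<complex>& lc)
      {
          // back_insert_iterator supports *to = value and ++to, so it plugs into Copy unchanged
          Copy(&ac[0], &ac[200], std::back_inserter(lc));
          Copy(lc.begin(), lc.end(), std::back_inserter(vc));
      }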

    Read the article

  • Struggling with the Single Responsibility Principle

    - by AngryBird
    Consider this example: I have a website. It allows users to make posts (which can be anything) and add tags that describe the post. In the code, I have two classes that represent the post and tags. Let's call these classes Post and Tag. Post takes care of creating posts, deleting posts, updating posts, etc. Tag takes care of creating tags, deleting tags, updating tags, etc. There is one operation that is missing: the linking of tags to posts. I am struggling with which class should do this operation; it could fit equally well in either. On one hand, the Post class could have a function that takes a Tag as a parameter and then stores it in a list of tags. On the other hand, the Tag class could have a function that takes a Post as a parameter and links the Tag to the Post. The above is just an example of my problem. I am actually running into this with multiple classes that are all similar: the operation could fit equally well in both. Short of actually putting the functionality in both classes, what conventions or design styles exist to help me solve this problem? I am assuming there has to be something short of just picking one? Maybe putting it in both classes is the correct answer?
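
    One common way out, shown as a rough Python sketch rather than a prescription: keep Post and Tag responsible only for their own lifecycles and give the association itself a home of its own (the Tagging and TaggingRepository names are illustrative):

      class Tagging:
          """Represents the link between one Post and one Tag."""
          def __init__(self, post, tag):
              self.post = post
              self.tag = tag

      class TaggingRepository:
          """Creates, deletes and queries Post<->Tag links, so neither class owns the other."""
          def __init__(self):
              self._links = []

          def tag_post(self, post, tag):
              self._links.append(Tagging(post, tag))

          def tags_for(self, post):
              return [link.tag for link in self._links if link.post is post]

          def posts_for(self, tag):
              return [link.post for link in self._links if link.tag is tag]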

    Read the article

  • BizTalk 2009 - Scoped Record Counting in Maps

    - by StuartBrierley
    Within BizTalk there is a functoid called Record Count that will return the number of instances of a repeated record or repeated element that occur in a message instance. The input to this functoid is the record or element to be counted. As an example, take the following Source schema, where the Source message has a repeated record called Box and each Box has a repeated element called Item: An instance of this Source schema may look as follows: 2 box records - one with 2 items and one with only 1 item. Our destination schema has a number of elements and a repeated box record.  The top-level elements contain totals for the number of boxes and the overall number of items.  Each box record contains a single element representing the number of items in that box. Using the Record Count functoid it is easy to map the top-level elements, producing the expected totals of 2 boxes and 3 items: We now need to map the total number of items per box, but how will we do this?  We have already seen that the Record Count functoid returns the total number of instances for the entire message, and unfortunately it does not allow you to specify a scoping parameter.  In order to achieve scoped record counting we will need to make use of a combination of functoids. As you can see above, by linking to a Logical Existence functoid from the record/element to be counted we can then feed the output into a Value Mapping functoid.  Set the other Value Mapping parameter to "1" and link the output to a Cumulative Sum functoid. Set the other Cumulative Sum functoid parameter to "1" to limit the scope of the Cumulative Sum. This gives us the expected results of Items per Box of 2 and 1 respectively. I ran into this issue with a larger schema on a more complex map, but the eventual solution is still the same.  Hopefully this simplified example will act as a good reminder to me and save someone out there a few minutes of head scratching.

    Read the article

  • JPEG images not loading on PlayBook (Marmalade + iwgame)

    - by Vexille
    I'm using iwgame on a test project and I was trying to render different resolutions of JPG and PNG images. Everything works fine on the Marmalade Simulator; however, once I deploy the game to our PlayBook and run it, only the PNG images are shown. I have declared the images in the MKB file and in an XML file that iwgame uses to load the images. I've checked the deployments folder and all images are present in the intermediatefiles/native folder. We're currently using a BlackBerry-only license, so we can only test this on the PlayBook, but we do intend to get a Community license and deploy to iOS and Android devices eventually (I'm not sure if this is a problem exclusive to the PlayBook). I really don't know if this is a Marmalade or an iwgame issue. I have a different test project without iwgame and it simply won't run with jpg images (I get the error: 'Could not find handler for extension "jpg"'). While searching for a solution, I've seen people talking about using libjpg, but I've also found that Marmalade supposedly has integrated native jpeg support (and because of that iwgame has abandoned their jpeg loading support since v0.340), so I don't know what to think. I'm currently using the most recent versions of both Marmalade and iwgame, I believe: Marmalade 6.1.2 and iwgame 0.400. Also, please let me know if there's an easier or better way to do this, such as linking libjpg or something (I'm not exactly sure how to do this). I really would appreciate some help with this; there's a huge difference in size for the images we're planning to use, from a ~500kb jpg file to a ~3.5mb png file. Thanks, guys.

    Read the article

  • Class hierarchy problem in this social network model

    - by Gerenuk
    I'm trying to design a class system for a social network data model - basically a link/object system. Now I have roughly the following structure (simplified and only relevant methods shown) class Data: "used to handle the data with mongodb" "can link, unlink data and also return other linked data" "is basically a proxy object that only stores _id and accesses mongodb on requests" "it looks like {_id: ..., _out: [id1, id2,...], _inc: [id3, id4, ...]}" def get_node(self, id) "create a new Data object from the underlying mongodb" "each data object can potentially create a reference object to new mongo data" "this is needed when the data returns the linked objects" class Node: """ this class proxies linking calls to .data it includes additional network logic operations whereas Data only contains a basic database solution """ def __init__(self, data): "the infrastructure realization is stored as composition by an included object data" "Node basically proxies most calls to the infrastructure object data" def get_node(self, data): "creates a new object of class Object or Link depending on data" class Object(Node): "can have multiple connections to Link" class Link(Node): "has one 'in' and one 'out' connection to an Object" This system is working, though it maybe wouldn't work outside Python. Now I have two questions: 1) I want the infrastructure of the data storage to be replaceable. Earlier I had Data as a superclass of Node so that it provided the necessary calls. But (without dirty Python tricks) you cannot replace the superclass dynamically. Is using composition therefore recommended? The drawback is that I have to proxy most calls (link, unlink etc). Any thoughts? 2) The class Node contains the common method .get_node which is used to build new Object or Link instances after reading out the data. Some attribute of the data decides whether the object, which is only stored by id, should be instantiated as an Object or a Link. The problem here is that Node needs to know about Object and Link in advance, which seems dodgy. Do you see a different solution? Both Object and Link need to instantiate one of all possible types depending on what they find in their linked data. Are there any other ideas on how to implement a flexible Object/Link structure where the underlying database storage is isolated?
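
    For question 2, a rough sketch of one way to keep Node ignorant of its subclasses: let each subclass register itself against the discriminating value stored with the data (the "kind" attribute here is illustrative; the real discriminator lives in the mongo document):

      NODE_TYPES = {}

      def node_type(kind):
          """Class decorator: register a Node subclass for a given data kind."""
          def register(cls):
              NODE_TYPES[kind] = cls
              return cls
          return register

      class Node:
          def __init__(self, data):
              self.data = data

          @staticmethod
          def from_data(data):
              # look the concrete class up instead of hard-coding Object/Link inside Node
              return NODE_TYPES[data.kind](data)

      @node_type("object")
      class Object(Node):
          pass

      @node_type("link")
      class Link(Node):
          pass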

    Read the article

  • Which is the best image hosting site for hosting images for the website? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements? I currently have a website and blog and am using a limited web hosting plan. When I upload images to my hosting server they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and direct-linking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, I see they all have certain restrictions. The features I am looking for are as follows: 1. At least 10 GB space 2. At least 500 GB bandwidth (because I have very high traffic) 3. Very high speed even during heavy load, like 1000 visitors accessing every hour 4. Ultra reliable servers (99.9% uptime) 5. Privacy control 6. Must never delete an image if inactive 7. Create and manage albums 8. A company that will stay in business at least for the next 10 years 9. Free of cost 10. Hotlinking/direct linking of images

    Read the article

  • Fake links cause crawl error in Google Webmaster Tools

    - by Itai
    Google reported Crawl Errors last week on my largest site through Webmaster Tools. Here is the message: Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages. The Crawl Errors list is now full of hundreds of fake links like these, causing 16,519 errors so far: Note that my site does not even have a search.html and is not related to any of the terms shown in the above image. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: Each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: Does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can I do about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days. No crawl errors were being reported previously.

    Read the article

  • Oracle Unveils AutoVue Release 20.1

    - by prasenjit.niyogi(at)oracle.com
    We are extremely pleased to announce the availability of Oracle's AutoVue Release 20.1. AutoVue 20.1 is the latest major release of the family of Enterprise Visualization solutions from Oracle. Highlights of the release include: Unparalleled new format support and enhancements for 3D CAD, 2D CAD, ECAD and PDF documents New capabilities that support end-to-end design-to-manufacture processes in the Electronics & High Tech space, allowing manufacturing engineers to perform accurate manufacturability reviews through better support for variants, overlays and polarity Significant printing enhancements, such as printing of markup notes, support for Excel file print settings, and printing in grayscale, which serve to optimize paper-based business processes Powerful integration enablement capabilities to extend visualization into existing enterprise architectures and systems, including AutoVue Hotspots that enable visual navigation and action by linking visual data to structured enterprise data, and new AutoVue Document Print Services (DPS) to enrich enterprise applications with format- and platform-agnostic printing of any document type Improvements for cost-effective AutoVue deployment and administration, including support for virtualization Release 20.1 Webcast - Attend the webcast on April 13th at 12:00 pm EST to discover what is new and exciting in the latest release. Encourage your customers, prospects, and partners to attend. Title: Oracle Unveils AutoVue Release 20.1 Channel: Oracle AutoVue Channel Register Here: http://www.brighttalk.com/webcast/26282 To discover more about the latest release, and to find out what customers and partners are saying about the value of this offering, check out the What's New in AutoVue 20.1 Datasheet You can also learn all about the latest format support here: AutoVue 20.1 Format Support Sheet We look forward to seeing you at the webcast. If you have any questions feel free to ask, and we will answer them in this forum. Enjoy AutoVue 20.1!

    Read the article

  • Get external metadata with streamripper using python script

    - by user72379
    Hi, I like using streamripper to rip music from the web. I have a favorite radio station that doesn't have metadata for the songs, so I have to screen-scrape it from its website manually. I created this neat python script in the format that the docs suggest and linked the address in the GUI for streamripper, but it still doesn't work. Does anyone know how to make it work? I know it used to work. It gives you a sample here: http://streamripper.sourceforge.net/history.php import http import time import re u = 'SEE IMAGE FOR THIS URL' s = http.Session(0, 0) s.add_headers(h, persistent=1) while 1: c = unicode(s.get(u)) pat = r'class="title"([^<]+)([^<]+)' m = re.search(pat, c) title = m.group(1) artist = m.group(2) print 'TITLE='+title+'\n'+'ARTIST='+artist+'\n.\n' time.sleep(30) [img]http://s11.postimage.org/sok928lsz/urlstream.png[/img] I put the address to the script here: [img]http://s17.postimage.org/4bhmhi4yn/streamripper.png[/img] I've tried putting it in the root of the streamripper application and doing this: lax.py I've even compiled it to an EXE and tried linking to that... nothing. What am I doing wrong?

    Read the article

  • Does the google crawler really guess URL patterns and index pages that were never linked against?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup: Data-Server: Application with a RESTful interface which provides the data Website A: Provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID Website B: Provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID So the whole, non-private data set is stored on (1), and the websites (2) and (3) can fetch and display this data, which is a representation of the data with additional cross-linking between them. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar posted questions about that, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux: The Intel Math Kernel Library (MKL) on Intel chips The AMD Core Math Library (ACML) on AMD chips To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R.  Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library. Open-source R is linked to NetLib's BLAS libraries, but they are not multi-threaded and only use one core. While R's internal BLAS is efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.
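
    A minimal sketch of the Linux setup described above; the MKL install path is illustrative and depends on where MKL (or ACML) lives on your machine:

      # make the MKL shared libraries visible to the dynamic linker, then start R as usual
      export LD_LIBRARY_PATH=/opt/intel/mkl/lib/intel64:$LD_LIBRARY_PATH
      R --version    # Oracle R Distribution picks up the optimized BLAS/LAPACK at startup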

    Read the article

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they both have been encouraging is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for reuse? Thanks for any and all help!

    Read the article
