Search Results

Search found 31630 results on 1266 pages for 'content management'.


  • Pretty URL in ADF Faces of JDeveloper 11.1.2.2

    - by Frank Nimphius
    Many features planned for Oracle JDeveloper 12c find their way into current releases of Oracle JDeveloper 11g R1 and 11g R2. One example of such a feature is "pretty URL" - or "clean URL", as the Oracle JDeveloper 11g R2 (11.1.2.2) documentation puts it:

    "A.2.3.24 Clean URLs - Historically, ADF Faces has used URL parameters to hold information, such as window IDs and state. However, URL parameters can prevent search engines from recognizing when URLs are actually the same, and therefore interfere with analytics. URL parameters can also interfere with bookmarking. By default, ADF Faces removes URL parameters using the HTML5 History Management API. If that API is unavailable, then session cookies are used. You can also manually configure how URL parameters are removed using the context parameter oracle.adf.view.rich.prettyURL.OPTIONS. Set the parameter to off so that no parameters are removed. Set the parameter to useHistoryApi to only use the HTML5 History Management API. If a browser does not support this API, then no parameters will be removed. Set the parameter to useCookies to use session cookies to remove parameters. If the browser does not support cookies, then no parameters will be removed." See: http://docs.oracle.com/cd/E26098_01/web.1112/e16181/ap_config.htm#ADFUI12856

    So what this part of the documentation says is: in JDeveloper 11g R2 (11.1.2.2), Oracle ADF Faces automatically removes its internally used dynamic parameters from the URL. You can influence this with the prettyURL.OPTIONS context option, which, however, is not recommended, because the default behavior detects whether the browser client supports HTML5 History management. If it does not, ADF Faces uses a session cookie and, if that doesn't work either, falls back to the "old" behavior of appending URL parameters.

    What the documentation does not state explicitly is that this applies only to ADF Faces parameters (such as _afrLoop, Adf-Window-Id, etc.), but not to the ADF Controller token (_adf.ctrl-state). Removing the ADF Controller token is an enhancement request that will be implemented in Oracle JDeveloper 12c.
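
    For reference, context parameters like this one are set in the application's web.xml deployment descriptor. A minimal sketch - the parameter name and the "off" value come straight from the documentation quoted above; where you apply it in your own project is up to you:

        <!-- switch automatic URL parameter removal off entirely -->
        <context-param>
          <param-name>oracle.adf.view.rich.prettyURL.OPTIONS</param-name>
          <param-value>off</param-value>
        </context-param>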

    Read the article

  • Run a .sql script file in C#

    - by SAMIR BHOGAYTA
    // SMO (Microsoft.SqlServer.Management.Smo) executes the whole script,
    // including "GO" batch separators that a plain SqlCommand cannot handle.
    using System.Data.SqlClient;
    using System.IO;
    using Microsoft.SqlServer.Management.Common;
    using Microsoft.SqlServer.Management.Smo;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";

                // read the whole script file into memory
                FileInfo file = new FileInfo("C:\\myscript.sql");
                string script = file.OpenText().ReadToEnd();

                // let SMO run the script against the server
                SqlConnection conn = new SqlConnection(sqlConnectionString);
                Server server = new Server(new ServerConnection(conn));
                server.ConnectionContext.ExecuteNonQuery(script);
            }
        }
    }

    Read the article

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
          content => "http_proxy: $http_proxy\n",
          notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the code does something like /etc/init.d/puppet stop && /etc/init.d/puppet start. Since puppet is not running, it cannot start itself... it kind of makes sense. How to make puppet restart itself when this file changes? Note that this file may not exist as well.
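
    One common workaround (not from the original question, just a hedged sketch) is not to notify the service at all, but to schedule the restart out-of-band so the current agent run can finish before the daemon is bounced, for example via an exec resource and at(1):

        # Hedged sketch: schedule the restart one minute out via at(1), so the
        # current agent run terminates cleanly before the daemon is restarted.
        # Assumes the 'at' utility is installed; the resource name is illustrative.
        exec { 'schedule-puppet-restart':
          command     => "/bin/sh -c 'echo \"/etc/init.d/puppet restart\" | at now + 1 minute'",
          path        => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
          refreshonly => true,
          subscribe   => File['/root/.gemrc'],
        }

    Another option is to drop the daemon entirely and run puppet agent --onetime from cron, in which case there is no long-running process to restart.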

    Read the article

  • Powershell – script all objects on all databases to files

    - by Nigel Rivett
    <#
    This simple PowerShell routine scripts out all the user-defined functions,
    stored procedures, tables and views in all the databases on the server that
    you specify, to the path that you specify.
    SMO must be installed on the machine (it is installed along with SSMS).
    To run: set the server name and path, open a command window, run powershell,
    paste the script below and press Enter. It will create the subfolders for
    the databases and objects if necessary.
    #>
    $path = "C:\Test\Script\"
    $ServerName = "MyServerNameOrIpAddress"

    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')
    $serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName

    $IncludeTypes = @("Tables","StoredProcedures","Views","UserDefinedFunctions")
    $ExcludeSchemas = @("sys","Information_Schema")

    $so = New-Object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
    $so.IncludeIfNotExists = 0
    $so.SchemaQualify = 1
    $so.AllowSystemObjects = 0
    $so.ScriptDrops = 0        # set to 1 to script DROP statements instead

    $dbs = $serverInstance.Databases
    foreach ($db in $dbs)
    {
        $dbname = "$db".replace("[","").replace("]","")
        $dbpath = "$path" + "$dbname" + "\"
        if ( !(Test-Path $dbpath) )
            { $null = New-Item -type directory -name "$dbname" -path "$path" }

        foreach ($Type in $IncludeTypes)
        {
            $objpath = "$dbpath" + "$Type" + "\"
            if ( !(Test-Path $objpath) )
                { $null = New-Item -type directory -name "$Type" -path "$dbpath" }

            foreach ($objs in $db.$Type)
            {
                if ($ExcludeSchemas -notcontains $objs.Schema)
                {
                    $ObjName = "$objs".replace("[","").replace("]","")
                    $OutFile = "$objpath" + "$ObjName" + ".sql"
                    $objs.Script($so) + "GO" | Out-File $OutFile #-Append
                }
            }
        }
    }

    Read the article

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on a website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is built on Drupal 6; the client is built on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e.:

    Drupal 6 server = 1 repository
    Drupal 6 client = 1 repository
    Drupal 7 server = 1 repository
    Drupal 7 client = 1 repository
    etc.

    Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories - one for the client and another for the server.
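
    For what it's worth, the second option maps naturally onto the convention drupal.org itself uses for contributed modules (one branch per core version). A hedged sketch of what that could look like - repository and branch names here are only illustrative:

        # one repository per component, one branch per Drupal core version
        git init server && cd server
        git checkout -b 6.x          # Drupal 6 version of the server module
        # ...commit the existing Drupal 6 code here...
        git checkout -b 7.x          # start the Drupal 7 port from the 6.x code
        # later: git checkout -b 8.x once Drupal 8 work begins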

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
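
    A first diagnostic step (not part of the original question, just a hedged suggestion) is to confirm whether requests are being answered from cache at all; the exact counter names vary between Varnish versions:

        # overall hit/miss counters (names differ slightly across Varnish versions)
        varnishstat -1 | grep -Ei 'cache_hit|cache_miss'

        # show only backend traffic; a constant stream here means objects are
        # being fetched from the file server instead of served from cache
        varnishlog -b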

    Read the article

  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a wordpress multisite with subfolders. The site's subfolders are mapped to domains, which are set to primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (getting page hits for each site in Google Analytics), barring the original site in the multisite, which in the Content Overview lists the other domains and the traffic they're getting along with the original domain's traffic. I don't want to track any other traffic for my original site than what actually goes to it, i.e. I don't want it tracking my other sites in the multisite. E.g. domain1.com is my original site and I have lots of other sites in the multisite, let's say domain2.com and domain3.com. In the Content Overview in Analytics it's listing, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!

    Read the article

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit that answer to add links to good articles that will help explain or teach each particular point. To search in only the answers from this question, use the inquestion:this option.

    Read the article

  • Extending the Value of Your Oracle Financials Applications Investment with Document Capture, Imaging and Workflow

    Learn how Oracle's end-to-end document imaging system extends the value and increases the automation of your Oracle Financials applications by using intelligent capture and imaging technologies to streamline high-volume operations like accounts payable. Oracle Imaging and Process Management 11g (Oracle I/PM 11g) offers an integrated system that digitizes paper invoices, intelligently extracts header information and line item details, initiates automated workflows, and enables in-context access to imaged invoices directly from Oracle Applications, including Oracle E-Business Suite Financials and PeopleSoft Enterprise Financial Management. Come hear more about these certified, standards-based application integrations, as well as how document imaging can help your organization achieve quick, measurable ROI by increasing efficiencies across financial departments and reducing costs related to paper storage and handling.

    Read the article

  • New White Papers Available

    - by mattande
    New Master Data Services white papers are now available on MSDN. For an application-agnostic overview of Master Data Management, see Organizational Approaches to Master Data Management. For the steps needed to configure Master Data Services to work with a SharePoint workflow, see SharePoint Workflow Integration with Master Data Services.

    Read the article

  • MAAS and Openstack Network

    - by user281985
    I'm trying to figure out the best way for the OpenStack and MAAS networks to co-exist. Assuming MAAS used 10.0.0.0/24 for provisioning the various OpenStack nodes, once OpenStack (with Quantum) is deployed, should 10.0.0.0/24 be used as the management network, the external network, both, or discarded? The reason for the question is that I ran a deployment where OpenStack used 10.0.0.0/24 as its management network while MAAS DHCP and DNS were active; however, I could only access VMs through a namespace and was not able to utilize MAAS's DHCP to assign floating IPs. (The VMs' network was through an internal bridge, 10.10.10.10, and the external bridge was using the same interface as MAAS.) Any thoughts or ideas are appreciated.

    Read the article

  • Introduction to Oracle's Primavera P6 Analytics

    Primavera P6 Analytics is a new product offering from Oracle which utilizes the Oracle BI product, Oracle Business Intelligence Suite Enterprise Edition Plus (Oracle BI EE Plus), to provide root-cause analysis, management by exception, and early-warning problem indicators from your P6 Enterprise Project Portfolio Management projects and portfolios. Out-of-the-box reports and dashboards provide drill-down and drill-through insight so you can jump directly into the problem areas in Primavera P6 Enterprise Project Portfolio Management to course-correct or implement best practices based on successful project implementations. A complete view into all of your projects' histories provides the trends and analysis needed to identify the best course of action to take next.

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.

    Monday, October 1st:

    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3)
    Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); and Code Analyzer (used to discover latent memory corruption issues).

    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302)
    Decisions, decisions--at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.

    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270)
    Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.

    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005)
    Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.

    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2)
    Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!

    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012)
    Another option for this time slot: learn about how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • Total Cloud Control for Systems - Webcast on April 12, 2012 (18:00 CET/5pm UK)

    - by Javier Puerta
    Total Cloud Control Keeps Getting Better

    Join Oracle Vice President of Systems Management Steve Wilson and a panel of Oracle executives to find out how your enterprise cloud can achieve 10x improved performance and 12x operational agility. Only Oracle Enterprise Manager Ops Center 12c allows you to:

    Accelerate mission-critical cloud deployment
    Unleash the power of Solaris 11, the first cloud OS
    Simplify Oracle engineered systems management

    You'll also get a chance to have your questions answered by Oracle product experts and dive deeper into the technology by viewing our demos that trace the steps companies like yours take as they transition to a private cloud environment. Register today for this interactive keynote and panel discussion.

    Agenda:
    18:00 CET (5:00 pm UK) - Keynote: Total Cloud Control for Systems
    18:45 CET (5:45 pm UK) - Panel Discussion with Oracle Hardware, Software, and Support Executives
    19:15 CET (6:15 pm UK) - Demo Series: A Step-by-Step Journey to Enterprise Clouds

    Read the article

  • Rsync plugin to many local wordpress installs via script or cli

    - by Nick Abbey
    I am maintaining a large number of wordpress installs on a production server, and we are looking to deploy InfiniteWP for managing these installs. I am looking for a way to script the distribution of the plugin folder to all of these installs. On server wp-prod, all sites are stored in /srv//site/. The plugin needs to be copied from ~/iws-plugin to /srv//site/wp-content/plugins/. Here's some pseudo code to explain what I need to do:

        array dirs = <all folders in /srv>
        for each d in dirs
          if exists "/srv/d/site/wp-content/plugins"
            rsync -avzh --log-file=~/d.log ~/plugin_base_folder /srv/d/site/wp-content/plugins/
          else
            touch d.log
            echo 'plugin folder for "d" not found' >> ~/d.log
          end
        end

    I just don't know how to make it happen from the cli or via bash. I can (and will) tinker with a bash or ruby script on my test server, but I'm thinking the command-line-fu here on SF is strong enough to handle this issue much more quickly than I can hack together a solution. Thanks!
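
    A minimal bash sketch of the pseudo code above (paths and the plugin folder name are taken from the question; treat them as assumptions and test on a staging box first):

        #!/bin/bash
        # push the plugin folder into every site that has a wp-content/plugins dir
        plugin_src="$HOME/iws-plugin"

        for d in /srv/*/; do
          site="$(basename "$d")"
          target="${d}site/wp-content/plugins"
          log="$HOME/${site}.log"
          if [ -d "$target" ]; then
            rsync -avzh --log-file="$log" "$plugin_src" "$target/"
          else
            echo "plugin folder for '$site' not found" >> "$log"
          fi
        done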

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. not 'common' content like a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, but it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are:

    Is creating an account purely for AdSense to log in to the site with worth it in this case, seeing as it will be seeing its own 'personalized' version and not any other user's?
    More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.)
    How would you go about this problem?

    Read the article

  • Enterprise Trade Compliance: Changing Trade Operations around the World

    - by John Murphy
    We live in a world of incredible bounty and speed where any product can be delivered anywhere on earth. However, our world is also filled with challenges for business – where volatility, uncertainty, risk, and chaos are our daily companions. To prosper amid the realities of this new world, organizations cannot rely on old strategies; they need new business models. Key trends within the global economy are mandating that companies fully integrate global trade management best practices within broader supply chain management strategies, rather than simply leaving it as a discrete event at the end of the order or procurement cycle. To explain, many companies face a complicated and changing compliance environment. This is directly linked to the speed and configuration of the supply chain, particularly with the explosion of new markets, shorter service cycles and ship times, accelerating rates of globalization and outsourcing, and increasing product complexity and regulation. Read More...

    Read the article

  • How to analyze data

    - by Subhash Dike
    We are working on an application that allows users to search/read some content in a particular domain. We want to add a capability to the app that suggests content to the user based on their usage pattern (analyzing the data based on frequency and relevance). Currently, every time a user searches or reads something, we store that information in the backend database. We would like to use this data to present additional content to the user. Could someone explain what kind of tools would be required for such a job, and give an example? And what is this concept called - data analysis, data mining, business intelligence, or something else?

    Update: Sorry for being too broad; here is an example SQL database (just to give an idea - the actual db is a little different, with normalization and such):

    Table: UserArticles
    Fields: UserName | ArticleId | ArticleTitle | DateVisited | ArticleCategory

    Table: CategoryArticles
    Fields: Category | ArticleTitle | Author | etc.

    One category may have one or more articles. One user may have read the same article multiple times (in this case we place an additional entry in the UserArticles table).

    Task: use the information available in the UserArticles table and rank categories in the order in which they would be presented to the user automatically in another part of the application. Factors to be considered are frequency and recency. This might be possible through simple queries or may require specialized tools. Either way, the task is what is mentioned above. I am not too sure which route to take, hence the question. Thoughts?
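
    Assuming the simplified schema above, a per-user category ranking that weighs frequency against recency can be done in plain SQL before reaching for any BI tooling. A hedged T-SQL sketch - the scoring formula is just an illustration and would normally be tuned:

        -- rank categories for one user by how often and how recently they were read
        SELECT ArticleCategory,
               COUNT(*)         AS Frequency,
               MAX(DateVisited) AS LastVisited,
               COUNT(*) * 1.0 / (DATEDIFF(day, MAX(DateVisited), GETDATE()) + 1) AS Score
        FROM   UserArticles
        WHERE  UserName = @UserName
        GROUP  BY ArticleCategory
        ORDER  BY Score DESC;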

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an OpenDocument spreadsheet in C#.

    Importer as example

    My previous post about importers showed how to build flexible importer support into your web application. This post introduces a practical example of one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and add new methods as we go.

        // required namespaces: System.IO, System.Linq, System.Text, System.Xml.Linq
        // and ICSharpCode.SharpZipLib.Zip (SharpZipLib)
        public class OdsImporter : ImporterBase
        {
            public OdsImporter()
            {
            }

            public override string[] SupportedFileExtensions
            {
                get { return new[] { "ods" }; }
            }

            public override ImportResult Import(Stream fileStream, long companyId, short year)
            {
                string contentXml = GetContentXml(fileStream);

                var result = new ImportResult();
                var doc = XDocument.Parse(contentXml);

                // the first row is skipped - it always contains the header row on my sheets
                var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

                foreach (var row in rows)
                {
                    ImportRow(row, companyId, year, result);
                }

                return result;
            }
        }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I already noted there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all data rows and imports the data. As you can see, the first row is skipped, because the first row on my sheet is always the header row.

    Reading the ODS file

    Our import method starts by getting the XML from the *.ods file. ODS files, like OpenXML files, are zipped containers that hold different files. We need content.xml, as all the data is kept there. To get the contents of the file we use the SharpZipLib library to read the uploaded file as a *.zip file.

        private static string GetContentXml(Stream fileStream)
        {
            var contentXml = "";

            using (var zipInputStream = new ZipInputStream(fileStream))
            {
                ZipEntry contentEntry = null;
                while ((contentEntry = zipInputStream.GetNextEntry()) != null)
                {
                    if (!contentEntry.IsFile)
                        continue;
                    if (contentEntry.Name.ToLower() == "content.xml")
                        break;
                }

                if (contentEntry == null || contentEntry.Name.ToLower() != "content.xml")
                {
                    throw new Exception("Cannot find content.xml");
                }

                // read the zip entry into memory in 2000-byte chunks
                var bytesResult = new byte[] { };
                var bytes = new byte[2000];
                var i = 0;

                while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var arrayLength = bytesResult.Length;
                    Array.Resize<byte>(ref bytesResult, arrayLength + i);
                    Array.Copy(bytes, 0, bytesResult, arrayLength, i);
                }
                contentXml = Encoding.UTF8.GetString(bytesResult);
            }
            return contentXml;
        }

    If a content.xml file is found, we stop browsing the archive. We read this file into memory and return it as a UTF-8 string.

    Importing rows

    Our last task is to import the rows. We use a special method for this because we have to handle some tricks here. To keep files smaller, the cell count per row is not always the same: if there is more than one empty cell in a row, ODS keeps only one cell for the whole run of sequential empty cells. This cell has an attribute called number-columns-repeated, and its value is set to the number of sequential empty cells. This is why we use two indexers for the cells collection.

        private void ImportRow(XElement row, long companyId, short year, ImportResult result)
        {
            var cells = (from c in row.Descendants()
                         where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                         select c).ToList();

            var dto = new DataDto();

            var count = cells.Count;
            var j = -1;

            for (var i = 0; i < count; i++)
            {
                j++;
                var cell = cells[i];

                // a repeated empty cell stands for several columns - advance j past them
                var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
                if (attr != null)
                {
                    var numToSkip = 0;
                    if (int.TryParse(attr.Value, out numToSkip))
                    {
                        j += numToSkip - 1;
                    }
                }

                if (i > 30) break;

                if (j == 0)
                {
                    dto.SomeProperty = cells[i].Value;
                }
                if (j == 1)
                {
                    dto.SomeOtherProperty = cells[i].Value;
                }
                // some more data reading
            }

            // save data (companyId and year are used here; sensitive code omitted)
        }

    You can define your own class for import results and add to it all the problems found during the import. Your application then gets the results and shows them to the user.

    Conclusion

    Reading ODS files may seem a complex task, but it is actually very easy if we only need the data from those documents. We can use a zip library to get the content file and then parse it as XML. It is not hard to go through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it does not use any APIs that have special requirements on the server or infrastructure.
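
    A short usage sketch, assuming the ImporterBase/ImportResult types from the previous post in the series; the file path, company id and year are placeholders:

        using (var stream = File.OpenRead(@"C:\temp\data.ods"))
        {
            var importer = new OdsImporter();
            ImportResult result = importer.Import(stream, companyId: 1, year: 2012);
            // inspect result for imported rows and any problems reported
        }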

    Read the article

  • Best practices for App Idea ownership and shares

    - by JOG
    I am developing apps in my spare time. I am the sole developer, and two non-programmer friends of mine provide vision, content, algorithms and ideas. We always agree happily on all the features, todos and prioritizations. But naturally, coding it is the biggest part. When selling, we agree on splitting profit equally, that is 33% each. But version 1.0 naturally does not sell much, and I go on to try to make the app more viral. This includes tons of stuff where the others are of little help. Examples: adding support for sharing, Facebook Connect, gamifying, letting users add content, the home page, support, maintenance, and server services to make it easier for me to update content. The list is long. Suddenly I will be doing 100% of a lot of work but only "own" a third of the income. My friends may either "fade out" of the project after 1.0, or they continue to contribute, but with less value, and I would rather exchange them for more programmers or graphic designers. The effort they made up to version 1.0 is worth a lot to the app, and I realize I would never have done it without them. But I am doing all the work in the end. It is hard to negotiate splitting 90/5/5 instead of 33% each, because the idea is still theirs. How do I solve this? What are the best practices regarding ownership of the app? What kind of agreements could I make that keep it beneficial and motivational for me to continue developing the app?

    Read the article

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog with the pretty popular Thesis theme and almost no other customizations. Some time ago it was penalized by Google search - it simply stopped appearing in search results, even for search terms it used to be the top result for, like my name - Pawel Brodzinski - which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days; since Dec 27 it is out again. I know the Google guidelines, and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed, and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked the diagnostics in Webmaster Tools; no malware was detected and no strange search terms popped up. I read related threads on the Google Webmasters forum but found none of the solutions working for me. I posted a thread on the Google Webmasters forum (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to remove duplicate content from a couple of sites which were republishing my feed, but this didn't change the situation - the site was penalized again. The problem is I have no idea what could be wrong with the website or how to find out. To make the problem worse, I'm no webmaster; I just run a wordpress blog, which is supposed to be easy.

    Read the article
