Search Results

Search found 44258 results on 1771 pages for 'disable add ons'.


  • When Is It Acceptable to NOT Fix Broken Windows?

    - by Bullines
    In reference to broken windows... Are there times when refactoring is best left for a future activity? For example, suppose a project to add some new features to an existing internal system is assigned to a team that has not worked with the system until now, and the team is given a short timeline in which to work. In this scenario, can it ever be justifiable to defer major refactorings of the existing code for the sake of making the deadline?

    Read the article

  • Launchpad ppa supporting multiple versions of Ubuntu

    - by unknownone
    Is it possible for a Launchpad PPA to support multiple Ubuntu versions, such as 10.04 and 12.04, when the package itself was built on a 12.04 machine? When trying to add the PPA to an older machine, it gives an error saying the package was made on a 12.04 system and could not be installed. I'd like sudo apt-get install my-app to work with both 10.04 and 12.04. I am new to packaging and PPAs, so I do not know if anything like this exists. Any help would be appreciated, thanks!
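
    A common approach (my addition, not from the original question) is to upload a separate source package per Ubuntu series, changing the target series and version suffix in debian/changelog before each upload; the package name, version, and PPA address below are hypothetical:

        # Build and upload the source package for precise (12.04); the ~precise1
        # suffix keeps the two uploads distinct within the PPA.
        dch --distribution precise --newversion 1.0-1~precise1 "Build for 12.04"
        debuild -S -sa
        dput ppa:your-name/your-ppa ../my-app_1.0-1~precise1_source.changes

        # Repeat for lucid (10.04) with its own suffix; Launchpad builds each
        # upload against that series' toolchain and libraries.
        dch --distribution lucid --newversion 1.0-1~lucid1 "Build for 10.04"
        debuild -S -sa
        dput ppa:your-name/your-ppa ../my-app_1.0-1~lucid1_source.changes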

    Read the article

  • SOA Community Newsletter June 2012

    - by JuergenKress
    Dear SOA partner community member,

    Happy new fiscal year FY13, and thanks for the FY12 middleware business! Our SOA & BPM Partner Community continued to grow to almost 4000 members. Additionally, we launched the WebLogic Partner Community, which grew very fast to 800+ members! To continue our joint successful business in the new fiscal year, our top priorities for FY13 are:

    Become trained: the next opportunity is the summer camps in Lisbon & Munich or our on-demand SOA & BPM training; see our detailed training calendar below.
    Run your marketing & sales campaign: sales kits, marketing kits, solution catalog; add your services and your events to oracle.com, and advertisement.
    Get recognized: OFM awards, partner excellence awards, references & plaques.
    Become Specialized: all of the above makes the Oracle Specialization! Make sure you get your Specialization benefits!

    Topics: key product focus areas will be SOA as the foundation for clouds; integration platform 2.0 for industrial SOA, including BAM & CEP; BPM & adaptive case management; and migrating legacy solutions to the strategic offerings. The new Oracle VM VirtualBox image is available to test SOA Suite and BPM Suite. To start your BPM 11g project, a new BPM Standard Edition, a license entry version, is available. EAIESB published a post with all BPMN 2.0 notations. If you want to learn more, please visit the Oracle Learning Library. We want to promote your SOA 11g & BPM 11g success: let us know where you are in production, and nominate your success for our Middleware Oracle Excellence Awards 2012. Douwe P. van den Bos published a SOA governance series on his blog: Principles of Service-Oriented Architecture, The Maturity of a Service-Oriented Architecture, and SOA Maturity Models. Please let us know if you have published interesting papers!

    It would be great to see you at the SOA, Cloud + Service Technology Symposium by Thomas Erl. Please feel free to get your conference pass with the Oracle discount code "DJMXZ370". See you in Lisbon & London at our summer camps!

    Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

    To read the newsletter please visit http://tinyurl.com/soanewsJune2012 (OPN account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum

    Technorati Tags: SOA Community newsletter, SOA Community, Oracle, OPN, Jürgen Kress, SOA Demo System, BPM

    Read the article

  • Grep through subdirectories

    - by Kathryn
    I've been looking at this thread: "Add a string to a text file from terminal". The solution (number 2, with ls | grep) works perfectly for .txt files in the current directory. How about if I wanted to search through a directory and the subdirectories therein? For example, I have to search through a directory that has many subdirectories, and they have many subdirectories, etc. I'm new to Linux, sorry, so I'm not sure if this is the right place.
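
    A minimal sketch of the usual answer (my addition, not part of the original question): grep can recurse into subdirectories on its own, so no ls pipeline is needed:

        # Search recursively through a directory tree, restricting to .txt files
        grep -r --include='*.txt' "search string" /path/to/directory

        # Alternatively, combine find with grep for finer control over which
        # files are visited; -H prints the file name with each match
        find /path/to/directory -name '*.txt' -exec grep -H "search string" {} +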

    Read the article

  • Why Does Adding a UDF or Code Truncate the # of Resources in a List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Under Resources, General, add the fields Resource Id, Resource Name and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources. Current flag = 1 marks the most up-to-date row for the resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension. (Query from the nqquery log:)

        select distinct 0 as c1,
               D1.c1 as c2,
               D1.c2 as c3,
               D1.c3 as c4
        from
               (select distinct T10745.CURRENT_FLAG as c1,
                       T10745.RESOURCE_ID as c2,
                       T10745.RESOURCE_NAME as c3
                from
                       W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
                where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
               ) D1

    If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD and W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because Resources and Resource Codes are in different dimensions, they must be joined through a common fact table. So any time you pull data from different dimensions, it will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to the dimensional data, nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.

    Here is a look at the query returned from the OBI server when trying to query Resource Id, Resource Name, Current Flag and a Resource Code. You'll see that the query includes an actual fact (AT_COMPLETION_UNITS), even though it is never returned when viewing the data through the Analysis.

        select distinct 0 as c1,
               D1.c2 as c2,
               D1.c3 as c3,
               D1.c4 as c4,
               D1.c5 as c5,
               D1.c1 as c6
        from
               (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                       T10706.CODE_VALUE_02 as c2,
                       T10745.CURRENT_FLAG as c3,
                       T10745.RESOURCE_ID as c4,
                       T10745.RESOURCE_NAME as c5
                from
                       W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                       W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                       W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
                where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID
                        and T10706.LAST_RUN_PER_DAY_FLAG = 1
                        and T10745.ROW_WID = T10754.RESOURCE_WID
                        and T10745.LAST_RUN_PER_DAY_FLAG = 1
                        and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
                group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
               ) D1
        order by c4, c5, c3, c2

    When querying in any subject area where you cross different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short, the first place to look is whether the object has associated facts.

    Read the article

  • dbus signal for volume up & down

    - by jldupont
    I recently upgraded to Ubuntu 10.10 Maverick Meerkat and, to my dismay, the "volume up" and "volume down" media keys do not send D-Bus signals anymore... how can I add these back? Thanks!! Update: it seems that under some circumstances (which I don't know exactly yet) the D-Bus signals start working again. It is as though the signals are re-activated when a certain application (to be determined) is executed.
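
    For reference, a quick way to check whether the signals are being emitted (my addition, not from the original post) is to watch the session bus while pressing the media keys; this assumes the GNOME settings daemon of that era, which exposed media-key events on the org.gnome.SettingsDaemon.MediaKeys interface:

        # Print every signal on that interface as it arrives; press the volume
        # keys and look for MediaPlayerKeyPressed events.
        dbus-monitor --session "type='signal',interface='org.gnome.SettingsDaemon.MediaKeys'"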

    Read the article

  • How to update off screen bitmap in a surfaceview thread

    - by DKDiveDude
    I have a SurfaceView thread and an off-screen texture bitmap that is being generated (changed), first row (line), every frame, and then copied one position (line) down on the regular SurfaceView bitmap to make a scrolling effect; I then continue to draw other things on top of that. Well, that is what I really want, but I can't get it to work even though I am creating a separate canvas for the off-screen bitmap. It is just not scrolling at all. In other words, I have a memory bitmap, the same size as the SurfaceView canvas, which I need to scroll (shift) down one line every frame, then replace the top line with a new random texture, and then draw that onto the regular SurfaceView canvas. Here is what I thought would work.

    My surfaceCreated, where I set up the bitmap and canvases and start the thread:

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            intSurfaceWidth = mSurfaceView.getWidth();
            intSurfaceHeight = mSurfaceView.getHeight();
            memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight,
                    Bitmap.Config.ARGB_8888);
            memCanvas = new Canvas(memBitmap);
            myThread = new MyThread(holder, this);
            myThread.setRunning(true);
            blnPause = false;
            myThread.start();
        }

    My thread, only showing the essential middle running part:

        @Override
        public void run() {
            while (running) {
                c = null;
                try {
                    // Lock canvas for drawing
                    c = myHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        // First draw the off-screen bitmap to the off-screen canvas, one line down
                        memCanvas.drawBitmap(memBitmap, 0, 1, null);
                        // Create a random one-line (row) texture bitmap
                        memTexture = Bitmap.createBitmap(imgTexture, 0,
                                rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1);
                        // Now add this texture line to the top of the off-screen canvas,
                        // and hopefully the bitmap the off-screen canvas points to
                        memCanvas.drawBitmap(memTexture, intSurfaceWidth, 0, null);
                        // Draw the updated off-screen bitmap to the regular canvas; at least
                        // I thought it would pick up the shift down plus the new texture line
                        c.drawBitmap(memBitmap, 0, 0, null);
                        // Other drawing to the canvas comes here
                    }
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        myHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    This is for my game Tunnel Run. Right now I have a working solution where I instead keep an array of bitmaps, the size of the surface height, that I populate with my random texture and then shift down in a loop each frame. I get 50 frames per second, but I think I can do better by scrolling a bitmap instead.
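
    A sketch of one likely fix (my addition, not from the original post): drawing a bitmap into a canvas that is backed by that same bitmap is unreliable, so keep two off-screen bitmaps and ping-pong between them each frame; note also that the texture line should probably be drawn at x = 0, not x = intSurfaceWidth. The field memBitmap2 is hypothetical:

        // Inside the locked-canvas section: memBitmap2 is a second full-size bitmap.
        Canvas backCanvas = new Canvas(memBitmap2);
        // Copy last frame's image into the back buffer, shifted one line down.
        backCanvas.drawBitmap(memBitmap, 0, 1, null);
        // Draw the new one-line texture across the top, starting at x = 0.
        backCanvas.drawBitmap(memTexture, 0, 0, null);
        // Present the back buffer, then swap roles for the next frame.
        c.drawBitmap(memBitmap2, 0, 0, null);
        Bitmap tmp = memBitmap;
        memBitmap = memBitmap2;
        memBitmap2 = tmp;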

    Read the article

  • Why does the camera's aspect ratio look good on a computer but not on Android devices?

    - by Pooya Fayyaz
    I'm developing a game for Android devices and I have a script that solves the aspect-ratio problem for computer screens, but not for my intended target platform. It looks perfect on a computer, even when re-sizing the game window, but not when running my game in landscape mode on mobile phones. This is my script:

        using UnityEngine;
        using System.Collections;
        using System.Collections.Generic;

        public class reso : MonoBehaviour
        {
            void Update()
            {
                // set the desired aspect ratio (the values in this example are
                // hard-coded for 16:9, but you could make them into public
                // variables instead so you can set them at design time)
                float targetaspect = 16.0f / 9.0f;

                // determine the game window's current aspect ratio
                float windowaspect = (float)Screen.width / (float)Screen.height;

                // current viewport height should be scaled by this amount
                float scaleheight = windowaspect / targetaspect;

                // obtain camera component so we can modify its viewport
                Camera camera = GetComponent<Camera>();

                // if scaled height is less than current height, add letterbox
                if (scaleheight < 1.0f && Screen.width <= 490)
                {
                    Rect rect = camera.rect;
                    rect.width = 1.0f;
                    rect.height = scaleheight;
                    rect.x = 0;
                    rect.y = (1.0f - scaleheight) / 2.0f;
                    camera.rect = rect;
                }
                else // add pillarbox
                {
                    float scalewidth = 1.0f / scaleheight;
                    Rect rect = camera.rect;
                    rect.width = scalewidth;
                    rect.height = 1.0f;
                    rect.x = (1.0f - scalewidth) / 2.0f;
                    rect.y = 0;
                    camera.rect = rect;
                }
            }
        }

    I figured that my problem occurs in this part of the script:

        if (scaleheight < 1.0f)
        {
            Rect rect = camera.rect;
            rect.width = 1.0f;
            rect.height = scaleheight;
            rect.x = 0;
            rect.y = (1.0f - scaleheight) / 2.0f;
            camera.rect = rect;
        }

    It looks like this on my mobile phone in portrait and in landscape mode (screenshots omitted).
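
    A hedged guess at the cause (my addition, not from the original question): the letterbox branch is gated on Screen.width <= 490, a pixel threshold most phone screens exceed, so phones always fall into the pillarbox branch; dropping the width check makes the mobile behavior match the working desktop logic:

        // Letterbox whenever the window is wider-but-shorter than the target
        // ratio, regardless of the screen's pixel width.
        if (scaleheight < 1.0f)
        {
            Rect rect = camera.rect;
            rect.width = 1.0f;
            rect.height = scaleheight;
            rect.x = 0;
            rect.y = (1.0f - scaleheight) / 2.0f;
            camera.rect = rect;
        }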

    Read the article

  • Why profile applications using AOP?

    - by Vance
    When tuning performance in a web application, I am looking for good, lightweight profiling tools to measure the execution time of each method. I know that the easiest profiling method is to log the start time and end time of each method, but I see more and more people using AOP to profile (adding @Profiled before each method). What's the benefit of AOP profiling compared to the common "log" way? Thanks in advance, Vance
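
    For illustration (my addition, not from the original question), a minimal AspectJ-style sketch of what such an annotation's advice typically does: the timing logic lives in one aspect instead of being repeated in every method, so it can be changed or removed without touching business code. The @Profiled annotation and its package are hypothetical:

        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;

        @Aspect
        public class ProfilingAspect {
            // Wrap every method carrying the (hypothetical) @Profiled annotation.
            @Around("@annotation(com.example.Profiled)")
            public Object profile(ProceedingJoinPoint pjp) throws Throwable {
                long start = System.nanoTime();
                try {
                    return pjp.proceed(); // run the actual method
                } finally {
                    long elapsedMs = (System.nanoTime() - start) / 1000000;
                    System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
                }
            }
        }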

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context: working as a freelance developer, I often made websites based completely on XSLT. In other words, on every request an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into an HTML/XHTML page to send to the browser. This makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it is much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question: modern browsers can take an XML file and transform it with an associated XSL file declared in the XML, like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). Those 50% of users would receive only the XML file on each request, reducing both their bandwidth and the server's (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:

    - Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes to make are to add the XSL file link to every XML and to add a browser check.
    - More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files are cached now).
    - Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    - Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).
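
    A minimal sketch of the client-side setup being described (my addition; the file and element names are hypothetical): the server emits XML with a stylesheet processing instruction, and the browser fetches demo.xslt and renders the transformed result:

        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet href="demo.xslt" type="text/xsl"?>
        <page>
          <user>alice</user>
          <content>Hello, world</content>
        </page>

    And the matching stylesheet, which the browser caches like any other static asset:

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/page">
            <html>
              <body>
                <p>Signed in as <xsl:value-of select="user"/></p>
                <p><xsl:value-of select="content"/></p>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>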

    Read the article

  • Using Network load balancing to distribute load for SharePoint2010 – Part3 of building my own development SharePoint2010 Farm

    - by ybbest
    Part1 of building my own development SharePoint2010 Farm
    Part2 of building my own development SharePoint2010 Farm
    Part3 of building my own development SharePoint2010 Farm

    In my last post, I installed SharePoint 2010 on one of the servers (WFE One) and configured it using the OOB SharePoint configuration wizard. In this post I will show you how to use OOB Windows Network Load Balancing to distribute load for a SharePoint 2010 site.

    1. Install SharePoint on another server, WFE Two (you can follow the steps in my last post), but instead of choosing to create a new farm, select "connect to existing farm" this time.
    2. Click Next, then click the Retrieve Database Names button and select the farm configuration database.
    3. Click Next and enter the passphrase you specified when you first installed the SharePoint farm.
    4. Click the advanced settings and select "Use this machine to host the web site".
    5. Click OK to finish the configuration.
    6. Next, install NLB on the two WFE (web front end) SharePoint servers.
    7. Configure NLB to create the cluster: go to Start, Administrative Tools, Network Load Balancing Manager.
    8. Right-click the Network Load Balancing Clusters node and select New Cluster.
    9. Type in the host name that is to be part of the new cluster.
    10. Type in the IP address for the cluster.
    11. Select Multicast for this cluster. (The default is Unicast.)
    12. You can configure the Port Rules for the clustering, but I will leave the defaults here.
    13. Add another WFE to the cluster.
    14. Type in the host name that is to be part of the new cluster.
    15. Set the Priority to 2.
    16. Click Next to complete the cluster setup.
    17. Create an entry in the DNS for the new cluster.
    18. Add the binding to the IIS site in the IIS Manager.
    19. Change the alternate access mapping for your default site collection from http://sp2010wefone to http://team.
    20. Browse to http://team; you will be redirected to the SharePoint site.
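
    If you prefer to script the cluster creation instead of clicking through the wizard (my addition; this assumes the NetworkLoadBalancingClusters PowerShell module from Windows Server 2008 R2 or later, and the interface, host and IP names below are hypothetical):

        Import-Module NetworkLoadBalancingClusters

        # Create the cluster on the first WFE in multicast mode...
        New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "team" `
            -ClusterPrimaryIP 192.168.1.50 -OperationMode Multicast

        # ...then join the second WFE to it.
        Add-NlbClusterNode -NewNodeName "SP2010WFETWO" -NewNodeInterface "Local Area Connection"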

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps, and we will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources (including Siebel, Oracle, PeopleSoft, SAP, and others) into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of the Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports, dashboards, and a model that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve success.
    - Build-vs-buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems; data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP.
    - Present OLTP data and highlight issues of poor or missing data, improving data quality and accuracy.

    Prebuilt integrations
    - Prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with input from functional experts on the BI and Applications teams.
    - Out-of-the-box dimensional-model-to-source-model mappings.
    - Multi-source and multi-instance support.

    Rich data model
    - A very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective and reflects source system complexities.
    - A conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360.
    - Over 360 fact tables across 7 product areas: CRM (145), SCM (47), Financials (28), Procurement (20), HCM (27), Projects (18), Campus Solutions (21), PLM (56).
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise, and ledger based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI-related transformations
    - The BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements; these have been distilled into common patterns and abstracted logic that can be readily reused across different modules.
    - Slowly changing dimension support.
    - Hierarchy flattening support; row/column hybrid hierarchy flattening; "as is" vs. "as was" hierarchy support.
    - Currency conversion: support for 3 corporate currencies plus CRM, ledger, and transaction currencies.
    - UOM conversion.
    - Internationalization/localization and dynamic data translations.
    - Code standardization (domains).
    - Historical snapshots.
    - Cycle and process lifecycle computations.
    - Balance facts.
    - Equalization of GL accounting chartfields/segments; standardized values for categorizing GL accounts.
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL.
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS/Fusion Accruals.
    - Complex event interpretation of source data, e.g.: what constitutes a transfer; deriving supervisors via the position hierarchy; deriving the primary assignment in PSFT; categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments; counting events, e.g. converting events to fact counters so that the number of hires can easily be added up and compared alongside total transfers and terminations.
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison.
    - Adding value to data to aid analysis through banding, additional domain classifications, and groupings to allow higher-level analytical reporting and data discovery.
    - Calculation of complex measures, for example COGS, DSO, DPO, inventory turns, etc., and transfers within a hierarchy or out of/into a hierarchy relative to a viewpoint in the hierarchy.

    Configurability and extensibility support
    - Support for extensibility for various entities, either automated or as part of the extension methodology.
    - Key flexfield and descriptive flexfield support.
    - Extensible attribute support (JDE).
    - Conformed domains.

    ETL architecture
    - A modular adapter architecture allows support of multiple product lines in a single conformed model.
    - Multi-source and multi-technology.
    - Orchestration: creates a load plan that takes task dependencies and the customer's deployment into account to generate a plan from many complex ETL tasks.
    - Plan optimization allowing parallel ETL tasks.
    - Oracle: bitmap indexes and partition management.
    - High availability support; follow-the-sun support.

    TCO
    - Several utilities and capabilities help with the overall total cost of ownership and ensure a rapid implementation.
    - Improved cost of ownership: lower cost to deploy; ongoing support for new versions of the source application.
    - Task-based setup flows; data lineage; functional setup performed in a web UI by a functional person.
    - Configuration test-to-production support.

    Security
    - Both data and object security, enabling implementations to quickly configure the application per their reporting security needs.
    - Fine-grained object security at the report/dashboard and presentation catalog level.
    - Data security integration with source systems; extensible to support external data security rules.

    Extensive set of KPIs
    - Over 7,000 base and derived metrics across all modules.
    - Time series calculations (YoY, % growth, etc.).
    - Common currency and UOM reporting.
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.).

    Prebuilt reports and dashboards
    - 3,000+ prebuilt reports supporting a large number of industries.
    - Hundreds of role-based dashboards.
    - Dynamic currency conversion at the dashboard level.

    Highly tuned performance
    - The BI apps have been tuned over the years for both performant ETL and dashboard performance, using best practices and advanced database features.
    - An optimized data model for BI and analytic queries.
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow scalable end-user performance.
    - Incremental extracts and loads; incremental aggregate builds.
    - Automatic table index and statistics management; parallel ETL loads.
    - Source system delete handling.
    - Low-latency extract with GoldenGate; micro-ETL support.
    - Bitmap indexes; partitioning support.
    - Modularized deployment: start small and add other subject areas seamlessly.

    Source-specific staging and real-time schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE.

    Application integrations
    - Integration with source systems and other applications that add value through BI and enable BI consumption during operational decision making.
    - Embedded dashboards for Fusion, EBS, and Siebel applications.
    - Action Link support; Marketing Segmentation; Sales Predictor Dashboard; Territory Management.

    External integrations
    - Support for loading external data; external data enrichment choices: UNSPSC, item class, etc.; extensible spend classification.

    Broad deployment choices
    - Exalytics support.
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL.
    - ETL tool of choice: ODI (coming), Informatica.
    - Extensible and customizable: an extensible architecture and methodology to add custom and external content.
    - Upgradable across releases.

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I created a little filters module based on Full-Text Search. I configured my dev machine, then the same for two testing servers: one in our company for internal testing before we deployed to the client, and then the testing server at the client. Until last week this build was still on the testing server, and finally we got feedback that we can deploy it on the production one. Let me only say that I lost half a day because I had not correctly remembered what I did to configure FTS on the previous servers, and I had no notes for that. I foolishly believed in my memory. Lesson learned.

    For future reference, a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx):

    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. To the PATH global system variable, add the path to the catalog where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       - sp_fulltext_service 'load_os_resources', 1
       - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check whether the plugins are visible:
       - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is ok*.
    8. Right now we can create a catalog for FTS and indexes on the appropriate columns.

    *I lost a lot of hours finding out why the plugin for the *.pdf files wasn't indexing any file in the database, even though there was a line for this plugin in the sys.fulltext_document_types table. After deeper investigation I found that the *.pdf files actually were indexed. At least the EOF sign was added to the indexes, and nothing more for each file. In the end the problem was that I forgot to add the \bin to the plugin path in the PATH variable.
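
    A minimal sketch of step 8 (my addition; the table, column, and index names are hypothetical): create a catalog, then a full-text index over a varbinary(max) column, where the type column holds the file extension that selects the IFilter:

        CREATE FULLTEXT CATALOG DocumentsCatalog;

        -- FileContent holds the raw file bytes; FileExtension holds '.pdf',
        -- '.docx', etc., telling FTS which IFilter to run for each row.
        CREATE FULLTEXT INDEX ON dbo.Documents (FileContent TYPE COLUMN FileExtension)
            KEY INDEX PK_Documents
            ON DocumentsCatalog;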

    Read the article

  • How to discriminate between two nodes with identical frequencies in a Huffman tree?

    - by Omega
    Still on my quest to compress/decompress files with a Java implementation of Huffman coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment. From the Wikipedia page, I quote:

        Create a leaf node for each symbol and add it to the priority queue.
        While there is more than one node in the queue:
            Remove the two nodes of highest priority (lowest probability) from the queue.
            Create a new internal node with these two nodes as children and with
            probability equal to the sum of the two nodes' probabilities.
            Add the new node to the queue.
        The remaining node is the root node and the tree is complete.

    Now, emphasis: "Remove the two nodes of highest priority (lowest probability) from the queue. Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities." So I have to take the two nodes with the lowest frequency. What if there are multiple nodes with the same low frequency? How do I discriminate which one to use? The reason I ask is that Wikipedia shows an image of the resulting tree, and I wanted to see if my Huffman tree was the same. I created a file with the following content:

        aaaaeeee nnttmmiihhssfffouxprl

    The result (tree images omitted) doesn't look so bad, but there clearly are some differences when multiple nodes have the same frequency. My questions are the following: What is Wikipedia's image doing to discriminate the nodes with the same frequency? Is my tree wrong? (Is Wikipedia's image method the one and only answer?) I guess there is one specific and strict way to do this, because for our school assignment, files that have been compressed by my program should be decompressible by my classmates' programs, so there must be a "standard" or "unique" way to do it. But I'm a bit lost with that. My code is rather straightforward; it literally just follows Wikipedia's listed steps. The way my code extracts the two nodes with the lowest frequency from the queue is to iterate over all nodes, and if the current node has a lower frequency than either of the two "smallest" known nodes so far, it replaces the higher of the two. Just like that.
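
    For what it's worth (my addition, not from the original question): there is no single mandated rule; implementations just need to agree on a deterministic tie-break. One common convention is a secondary sort key, e.g. the smallest symbol contained in each subtree. A sketch in Java, with a hypothetical Node class:

        import java.util.Comparator;
        import java.util.PriorityQueue;

        class Node {
            int freq;
            char minSymbol; // smallest symbol in this subtree, used as the tie-breaker
            Node left, right;

            Node(int freq, char symbol) { this.freq = freq; this.minSymbol = symbol; }

            Node(Node l, Node r) { // internal node combining two children
                this.freq = l.freq + r.freq;
                this.minSymbol = (char) Math.min(l.minSymbol, r.minSymbol);
                this.left = l;
                this.right = r;
            }
        }

        // Order by frequency, then by smallest contained symbol. Any two programs
        // that share this comparator (and the same left/right convention when
        // pairing the two removed nodes) will build identical trees.
        Comparator<Node> order = Comparator.<Node>comparingInt(n -> n.freq)
                                           .thenComparingInt(n -> n.minSymbol);
        PriorityQueue<Node> queue = new PriorityQueue<>(order);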

    Read the article

  • Convert VARCHAR() columns to NVARCHAR()

    - by ChrisD
    We recently underwent an upgrade that required us to change our database columns from VARCHAR to NVARCHAR to support Unicode characters. Digging through the internet, I found a base script, which I modified to handle reserved-word table names and to maintain the NULL/NOT NULL constraint of the columns.

    I ran this script:

        use NWOperationalContent -- your catalog name here
        GO

        SELECT 'ALTER TABLE ' + isnull(schema_name(syo.id), 'dbo') + '.[' + syo.name + '] '
             + ' ALTER COLUMN [' + syc.name + '] NVARCHAR('
             + case syc.length when -1 then 'MAX'
               else convert(nvarchar(10), syc.length) end + ') '
             + case syc.isnullable when 1 then ' NULL' else ' NOT NULL' end + ';'
        FROM sysobjects syo
        JOIN syscolumns syc ON syc.id = syo.id
        JOIN systypes syt ON syt.xtype = syc.xtype
        WHERE syt.name = 'varchar'
          and syo.xtype = 'U'

    which produced a series of ALTER statements that I could then execute against the tables. In some cases I had to drop indexes, alter the tables, and re-create the indexes. There might have been a better way to do that, but manually dropping them got the job done.

        use NWMerchandisingContent
        GO
        ALTER TABLE Locale DROP CONSTRAINT PK_Locale
        ALTER TABLE Country DROP CONSTRAINT PK_Country
        GO
        ALTER TABLE dbo.[Campaign] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [UnitOfmeasure] NVARCHAR(200) NULL;
        ALTER TABLE dbo.[BundleLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Imperative] NVARCHAR(MAX) NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [Instructions] NVARCHAR(MAX) NULL;
        ALTER TABLE dbo.[BundleComponentLocalization] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[BundleComponent] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Bundle] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Banner] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Video] ALTER COLUMN [Link] NVARCHAR(512) NOT NULL;
        ALTER TABLE dbo.[Video] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [VideoLink] NVARCHAR(512) NOT NULL;
        ALTER TABLE dbo.[ProductUsage] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[Thumbnail] ALTER COLUMN [ActorKey] NVARCHAR(200) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [Locale] NVARCHAR(8) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [UnitOfMeasure] NVARCHAR(150) NOT NULL;
        ALTER TABLE dbo.[SkuLocalization] ALTER COLUMN [SwatchColor] NVARCHAR(50) NOT NULL;
        etc...
        GO
        ALTER TABLE Locale ADD CONSTRAINT PK_Locale PRIMARY KEY (LocaleId)
        ALTER TABLE Country ADD CONSTRAINT PK_Country PRIMARY KEY (CountryId)

    Note that this ALTER is non-destructive to the data. Hope this helps.

    Read the article

  • How to prevent Google from indexing non-domain URL of website?

    - by Gavin
    My web host gives you two URLs for your website: the URL on your shared server, which is something like usr283725992783.webhost.com, and your domain URL, which is www.example.com. Google is indexing both of these URLs, but obviously I only want www.example.com to be indexed. I can't add "nofollow" tags to usr283725992783.webhost.com, because that URL serves the same files as www.example.com. How can I make Google ignore usr283725992783.webhost.com while continuing to index www.example.com?
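
    One common remedy (my addition, not from the original question) is a canonical link element in the shared pages, which works precisely because both hosts serve the same files: it names the URL variant Google should index:

        <link rel="canonical" href="http://www.example.com/current-page/" />

    Alternatively, if the host allows per-site Apache configuration, a 301 redirect consolidates the two hostnames outright (a hypothetical .htaccess sketch):

        RewriteEngine On
        # Any request whose Host header is not www.example.com gets sent there.
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]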

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high-quality code. A month ago I wrapped up my evaluation of the previous version of NDepend. The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq. The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window, which replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous one. However, it has a few annoyances. The error indicator is a red block, and it has the tendency to obscure your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you buy? I recommend NDepend overall. It has some rough points for me, which I detailed in my earlier evaluation (starting here), but it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes

    Read the article

  • SEO for replacing blog content, but keeping the same page URL

    - by cphill
    This might not have any major impact on SEO, but basically I have a random blog at this URL: http://example.com/blog (not a real URL), which I am removing and replacing with a company blog. I want to keep using the http://example.com/blog URL address, but I'm not sure how this would affect my SEO, since the random blog content I am removing lives under the example.com/blog URL prefix. Would I just add a 301 redirect for those old blog articles and leave the base /blog URL without any redirects?
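
    A sketch of that approach (my addition; the article paths are hypothetical): per-article 301s for the retired posts, while /blog itself keeps serving the new company blog with no redirect:

        # Apache .htaccess: permanently redirect retired article URLs.
        Redirect 301 /blog/old-post-about-cats http://example.com/
        Redirect 301 /blog/another-old-post http://example.com/blog/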

    Read the article

  • Can't update nvidia, error near the end of the install

    - by user94843
    I just got Ubuntu (first-timer to Ubuntu, so be very descriptive). I think there is a problem with my Nvidia update; it won't let me update. The name of the update in Update Manager is "NVIDIA binary xorg driver, kernel module and VDPAU library". When I attempt to install it, it starts out fine, but near the end I get a window titled "Package operation failed" with these details:

        installArchives() failed: Setting up nvidia-current (295.40-0ubuntu1) ...
        update-initramfs: deferring update (trigger activated)
        INFO:Enable nvidia-current
        DEBUG:Parsing /usr/share/nvidia-common/quirks/put_your_quirks_here
        DEBUG:Parsing /usr/share/nvidia-common/quirks/dell_latitude
        DEBUG:Parsing /usr/share/nvidia-common/quirks/lenovo_thinkpad
        DEBUG:Processing quirk Latitude E6530
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with Dell Inc.
        DEBUG:Quirk doesn't match
        DEBUG:Processing quirk ThinkPad T420s
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with LENOVO
        DEBUG:Quirk doesn't match
        Removing old nvidia-current-295.40 DKMS files...
        Loading new nvidia-current-295.40 DKMS files...
        Error! DKMS tree already contains: nvidia-current-295.40
        You cannot add the same module/version combo more than once.
        dpkg: error processing nvidia-current (--configure):
         subprocess installed post-installation script returned error exit status 3
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-31-generic
        Warning: No support for locale: en_US.utf8
        Errors were encountered while processing: nvidia-current

    The dialog's "Error in function:" section then repeats the same output.
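
    A hedged guess at a fix (my addition, not from the original post): the key line is "DKMS tree already contains: nvidia-current-295.40", so removing the stale DKMS registration and re-running the install usually clears this:

        # Drop the leftover DKMS entry for the module, then retry the package.
        sudo dkms remove nvidia-current/295.40 --all
        sudo apt-get install --reinstall nvidia-current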

    Read the article

  • How is programming affected by spatial aptitude?

    - by natli
    The longer I work on a project, the less clear it becomes. It's like I cannot separate various classes/objects in my head anymore. Everything starts mixing up, and it's extremely hard to take it all apart again. I start putting functions in classes where they really don't belong, and make silly mistakes such as writing code that I later find was 100% obsolete; things are no longer clearly mappable in my head. It isn't until I take a step back for several hours (or days sometimes!) that I can actually see what's going on again and be productive. I usually try to fight through this; I am so passionate about coding that I wouldn't for the life of me know what else I could be doing. This is when stuff can get really weird: I get so far up in my head that I sort of lose touch with reality (to some extent), in that various actions, such as pouring a glass of water, no longer happen on a conscious level. They happen on autopilot, during which pretty much all of my conscious concentration (is that even a thing?) is devoted to borderline-pointless problem solving (trying to separate elements of code). It feels like a losing battle. So I took an IQ test a while ago (the Wechsler Adult Intelligence Scale, I believe), and it turned out my spatial aptitude was quite low. I still got a decent score, just above average, so I won't have to poke things with a stick for a living, but I am a little worried that this is such a handicap when writing/engineering computer programs that I won't ever be able to do it seriously or professionally. I am very much interested in what other people think of this... could a low spatial aptitude be the cause of the problems described above? Maybe I should be looking more along the lines of ADD or something similar, because I did get diagnosed with ADD at the age of 17 (5 years ago), but the medicine I received didn't seem to affect me much, so I never took it all that seriously. Sorry if I got a little off-topic there; I know this is not a mental-help board. The question should be clear: how is programming affected by spatial aptitude? As far as I know, people are born with low/medium/high spatial aptitude, so I think it's interesting to find out whether the more fortunate are better programmers by birthright.

    Read the article

  • ADF How-To #4: Adding a View Criteria and a Search Panel

    - by Vik Kumar
    In this week's How-To we explain how to add a view criteria to a VO and then use it to create a Search Panel via customization. The detailed steps can be found here. We have also prepared a video walking you through the steps, available via our YouTube channel. For any questions or comments, please use the comments section below or visit our OTN forum. We are always looking for topic suggestions for additional How-Tos.

    Read the article

  • Design for an interface implementation that provides additional functionality

    - by Limbo Exile
    There is a design problem that I came upon while implementing an interface. Let's say there is a Device interface that promises to provide the functionalities PerformA() and GetB(). This interface will be implemented for multiple models of a device. What happens if one model has an additional functionality, CheckC(), which doesn't have equivalents in the other implementations? I came up with different solutions, none of which seems to comply with interface design guidelines.

    To add the CheckC() method to the interface and leave one of its implementations empty:

        interface ISomeDevice
        {
            void PerformA();
            int GetB();
            bool CheckC();
        }

        class DeviceModel1 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }

            public bool CheckC()
            {
                bool res = false;
                // assign res a value based on some validation
                return res;
            }
        }

        class DeviceModel2 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }

            public bool CheckC()
            {
                return true; // without checking anything
            }
        }

    This solution seems incorrect, as a class implements an interface without truly implementing all the demanded methods.

    To leave the CheckC() method out of the interface and use an explicit cast in order to call it:

        interface ISomeDevice
        {
            void PerformA();
            int GetB();
        }

        class DeviceModel1 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }

            public bool CheckC()
            {
                bool res = false;
                // assign res a value based on some validation
                return res;
            }
        }

        class DeviceModel2 : ISomeDevice
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }
        }

        class DeviceManager
        {
            private ISomeDevice myDevice;

            public void ManageDevice(bool newDeviceModel)
            {
                myDevice = newDeviceModel
                    ? (ISomeDevice)new DeviceModel1()
                    : new DeviceModel2();
                myDevice.PerformA();
                int b = myDevice.GetB();
                if (newDeviceModel)
                {
                    DeviceModel1 newDevice = myDevice as DeviceModel1;
                    bool c = newDevice.CheckC();
                }
            }
        }

    This solution seems to make the interface inconsistent.

    For the device that supports CheckC(): to add the logic of CheckC() into the logic of another method that is present in the interface. This solution is not always possible.

    So, what is the correct design to be used in such cases? Maybe creating an interface should be abandoned altogether in favor of another design?
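
    A common alternative (my addition, not from the original question) is a smaller secondary interface that only the capable model implements; callers probe for the capability instead of for a concrete class. A sketch, with a hypothetical ICheckable interface:

        interface ICheckable
        {
            bool CheckC();
        }

        class DeviceModel1Checkable : ISomeDevice, ICheckable
        {
            public void PerformA() { /* do stuff */ }
            public int GetB() { return 1; }
            public bool CheckC() { return true; }
        }

        // Caller probes for the capability, not the model:
        // if (myDevice is ICheckable checkable) { bool c = checkable.CheckC(); }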

    Read the article

  • Link tags in iframe widget

    - by john Smith
    I have a rating community site and I'm offering little iframe widgets with the average rating and some other little info. Does it make sense (for visibility, SEO) to add link tags to the head, like:

        <link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="rssfeed" />
        <link rel="index" title="main-profile" href="main-profile">

    to get a logical association of the widget with the related pages? How would you do this?

    Read the article

  • Install Dropbox in Xubuntu 11.10

    - by user34648
    I figured there might be a problem installing Dropbox using the .deb file from the website, since Xfce doesn't use Nautilus. Some tutorials said that you have to install Nautilus first, which I did. But when I installed Dropbox there weren't any problems, and it even shows an icon in the tray without me having to add anything. What I want to know is whether installing Nautilus was necessary, and which file manager I'm using now: Thunar or Nautilus?

    Read the article
