Search Results

Search found 17249 results on 690 pages for 'resource management'.


  • Question related to iPhone autorelease usage

    - by user524331
    Could someone please help me understand how allocation and memory management are handled in the following scenario? I am giving a pseudo-code example; the question that is troubling me is inline below:

        @interface First {
            NSDecimalNumber *number1;
        }
        @implementation First
        .....
        - (void)dealloc {
            [number1 release];
            [super dealloc];
        }

        @interface Second {
            NSDecimalNumber *number2;
        }
        @implementation Second
        .....
        - (First *)check {
            First *firstObject = [[[First alloc] init] autorelease];
            number1 = [[NSDecimalNumber alloc] initWithInteger:0];
            // do I need to autorelease number1 as well?
            return firstObject;
        }
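
    A minimal sketch of the ownership rule at play, assuming manual reference counting and assuming First declares number1 as a retain property (that property is my assumption, not part of the original pseudo-code): whatever you create with alloc/init you own, and you must balance it with release or autorelease.

        - (First *)check {
            First *firstObject = [[[First alloc] init] autorelease];
            // Hypothetical retain property on First: its setter retains the value,
            // and First's dealloc releases it. The ownership taken by alloc/init
            // here must therefore be given up by this method - hence autorelease.
            firstObject.number1 = [[[NSDecimalNumber alloc] initWithInteger:0] autorelease];
            return firstObject;
        }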

    Read the article

  • Android Static Variable Scope and Lifetime

    - by Edison
    I have an application with a Service that uses an ArrayList to store data in the background for a very long time; the variable is initialized when the service starts. The service runs in the background and the variable is accessed frequently (that's why I don't want to use file management or settings, since the file I/O would be very expensive in terms of battery life). The variable will likely grow to ~1MB-2MB over its lifetime. Is it safe to say that it will never be nulled by the GC or the system, or is there any way to prevent that? Thanks.
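
    A minimal sketch of the setup being described, with hypothetical names (my illustration, not the poster's code): a static field keeps the list strongly reachable, so the garbage collector will never collect it, but Android can still kill the hosting process under memory pressure, which loses the static along with everything else.

        import java.util.ArrayList;

        import android.app.Service;
        import android.content.Intent;
        import android.os.IBinder;

        public class DataService extends Service {
            // Strongly reachable for the life of the process: the GC will not
            // touch it, but the process itself may be killed by the system.
            private static final ArrayList<String> cache = new ArrayList<String>();

            @Override
            public int onStartCommand(Intent intent, int flags, int startId) {
                return START_STICKY; // ask Android to recreate the service if the process is killed
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null; // not a bound service in this sketch
            }
        }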

    Read the article

  • See queries that hit SQL

    - by Shaded
    I have a really basic, stupid-easy question about SQL... and I'll probably get -100 points... but here it goes anyway... Is there a way, using SQL Server 2008 Management Studio, to look at the queries that hit the server? I'm trying to debug a program and I get messages like "Incorrect syntax near the keyword 'AND'". Since the queries are being dynamically generated, it's a hassle to figure out what is going to the server. Any help is appreciated!
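
    One way to watch what arrives at the server, a minimal sketch using the dynamic management views that ship with SQL Server 2008 (SQL Server Profiler is the heavier, GUI-based alternative):

        -- Show the statement text of currently executing requests
        SELECT r.session_id, t.text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id <> @@SPID;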

    Read the article

  • Why does Resource Monitor in Windows 7 show half my memory as "Hardware Reserved"?

    - by Brandon
    Does anyone know why Resource Monitor in Windows 7 shows that I have 8 GB of RAM installed on my computer, but only 3.2 GB available, as 4.8 GB are "Hardware Reserved"? I researched this issue and tried going into msconfig and making sure that, in the boot options, the number-of-processors and max-memory options were turned off. I also opened the computer up and reseated each of the memory sticks while clearing out any dust that was in there. Some info on the system I am using:
    OS: Windows 7 Enterprise 64-bit Edition
    CPU: AMD Phenom X4 9750
    Memory: 4 x 2GB DDR2
    Motherboard: MS-7548 (Aspen)
    Any help would be much appreciated.

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence" that outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes.

    Recently, Gartner published its 2009 market shares report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers not to Corporate Performance Management, but Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing, with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    Gartner has ranked Oracle as #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share, with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    The Analytic Applications & Performance Management subsegment is more fragmented. It has, for instance, a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic Applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, among the large vendors, again Oracle is the one gaining the most share, with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010

    I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been silo technology platforms and application suites way too long. Management and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive.

    In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open and integrated still accurately describes the Oracle product strategy.

    frank

    Read the article

  • How do I keep from running out of memory on graphics for an Android app?

    - by user279112
    I've been working on an Android app in Eclipse, and so far my program hasn't really grown past midget size. However, I've already run into an Out of Memory error. You see, I've been using graphics composed solely of bitmaps and PNGs in this program, and recently, when I tried to add a little more functionality (mainly including a few more bitmaps and creating an extra sprite), it started crashing in the graphics thread's constructor - the sprite's constructor. When I tracked the problem down, it turned out to be an Out of Memory error that is seemingly caused by adding too many picture files to the program and creating Drawables out of them. This would be a problem, as I really don't have that many picture resources in the program... maybe 20 or so, and I haven't even started to include sound yet. These images aren't all that fancy. My questions are these:
    1) Are programs for the Android phone really that limited in how much memory they can employ, or is it probably something other than the 20-30 resource pictures causing that error?
    2) If the memory for Android apps is so limited it can't even handle 20-30 picture resources being loaded into Drawables that exist at the same time, then how in the world are you supposed to make decent graphics and sound for that thing?
    Thanks.
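
    For scale, one common mitigation, a minimal sketch assuming the images live in res/drawable (the resource name here is hypothetical): decode bitmaps at a reduced sample size so each Drawable holds far fewer decoded pixels in memory.

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Inside an Activity (or anywhere with a Resources handle):
        // decode at half resolution in each dimension - roughly 1/4 the pixel memory.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 2;
        Bitmap sprite = BitmapFactory.decodeResource(getResources(), R.drawable.sprite, opts);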

    Read the article

  • How to test a localized WPF application in Visual Studio 2012

    - by Michel Keijzers
    I am trying to create a localized application in C# / WPF in Visual Studio 2012. For that I created two resource files and changed one string in a (XAML) window to use the resource files (instead of a hardcoded string). I see the English text from the resource file, which is correct. However, I want to check whether the other resource file (fr-FR) also works, but I cannot find a setting or procedure to make my project run in French. Thanks in advance.
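
    One common way to test this, a minimal sketch assuming the default App class and fr-FR satellite resources: force the UI culture before any window loads.

        using System.Globalization;
        using System.Threading;
        using System.Windows;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                // Force the French resources for testing; remove for production.
                Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");
                Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");
                base.OnStartup(e);
            }
        }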

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture
    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:
    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data
    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2. Here is a pictorial view of the overall architecture:
    [Architecture diagram: EM12c with the AWS plug-in monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) via the CloudWatch API]
    Here are a few key features:
    - Rich and exhaustive list of metrics; metrics can be gathered from an Agent running outside AWS
    - Critical configuration information
    - Custom home pages with charts and AWS configuration information
    - Incidents generated based on thresholds set on monitoring data

    Discovery and Monitoring
    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:
    - EBS Resource: target type "Amazon EBS Service"; properties: CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
    - EC2 Resource: target type "Amazon EC2 Service"; properties: CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
    - RDS Resource: target type "Amazon RDS Service"; properties: CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port
    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.

        ./emcli add_target \
            -name="<target name>" \
            -type="AmazonEC2Service" \
            -host="<host>" \
            -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
            -subseparator=properties="="

        ./emcli set_monitoring_credential \
            -set_name="AWSKeyCredentialSet" \
            -target_name="<target name>" \
            -target_type="AmazonEC2Service" \
            -cred_type="AWSKeyCredential" \
            -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up under the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:
    - EBS Resource: configuration properties: Volume Id, Volume Type, Device Name, Size, Availability Zone; metrics: Response (Status), Utilization (QueueLength, IdleTime), Volume Statistics (ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput), Operation Statistics (ReadSize, WriteSize, ReadLatency, WriteLatency)
    - EC2 Resource: configuration properties: Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone; metrics: Response (Status), CPU Utilization, Disk I/O (DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput), Network I/O (NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput)
    - RDS Resource: configuration properties: Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone; metrics: Response (Status), Disk I/O (ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput), DB Utilization (BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage)

    Custom Home Pages
    As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated. Here are a few snapshots.
    [Screenshots: EBS Instance Home Page; EC2 Instance Home Page; RDS Instance Home Page]

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you have a need to create multiple custom functoids, you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library? First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID that is over 6000. When creating a Custom Functoid Library, I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. In this way you can easily see which custom functoid IDs are assigned to which custom functoid and which ID is next in the sequence of availability:

        namespace MyCompany.BizTalk.Functoids.TestFunctoids
        {
            class _FunctoidIDs
            {
                public const int TestFunctoid = 6001;
            }
        }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file. From:

        int functoidID;
        // This has to be a number greater than 6000
        functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
        this.ID = functoidID;

    To:

        this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate. Once it is renamed, you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match it. Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file:
    - Min and Max connections
    - Functoid Category
    - Input and Output connection types
    - The parameters and functionality of the Execute function
    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:
    - Name
    - Description
    - Tooltip
    - Exception
    - Icon
    You can change the string values by double-clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this, you are ready to import it into the functoid resource file. In Visual Studio, change the resource view to images, right-click the icon and choose import from file. You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.

    Read the article

  • Dell Management Packs in System Center Operations Manager 2007 R2?

    - by bwerks
    Hey all, I recently set up SCOM in a small business network environment. The root management server is a Dell Poweredge 2950, and I'd like to use SCOM to monitor it using Dell's management packs. I've imported the management packs into the SCOM deployment and followed Dell's installation instructions, but it doesn't seem to be fully working yet. Currently, the Diagram views in the Dell tree (Monitoring tab) seem to show me the server's place in the network topology, so it seems that at least part of it is working. However, none of the reports under "Performance and Power Monitoring Views" provide any information. When clicking on one of them (Power Consumption (Watts), for instance), the display area is blank and there is a tooltip visible that reads "No performance counter is selected. To select a counter, place a check mark in the Show column in legend below." However, in the legend, there's nothing there for me to check. I've installed OpenManage 6.2 on the server as per the Dell documentation, but I don't know what else I could have done that I missed. Does this sound like a familiar problem to anyone?

    Read the article

  • How can I control disk numbering (enumeration) in Windows 7 Disk Management?

    - by tim11g
    A desktop system had two drives (assigned C and D, which were enumerated in Disk Management as Disk 0 and Disk 1). A new SSD was added as the boot drive, after copying the C drive to the SSD. The SSD was connected to SATA port 0 (master) on the motherboard. The previous C drive was moved to SATA 2 and was reformatted as a non-booting NTFS partition. The D drive remained on SATA 1. The system boots and everything seems fine, and I was able to manually adjust the drive letters. However, the list in Disk Management is re-ordered: Disk 0 is now the old D drive on SATA 1, Disk 1 is the new boot drive (now C) on SATA 0, and Disk 2 is the former C drive (now assigned E) on SATA 2. Does the Disk 0, 1, 2 designation mean anything? I would prefer to have them display in Disk Management as drives C, D, and E from top to bottom. Is the disk enumeration based on the SATA port or something else? (If it were based on SATA port, they should be ordered C, D, E.) Is there any way to re-order the disk number assignments? What actually determines the disk number enumeration?

    Read the article

  • Handling conflicting priorities and expectations in project development

    - by jasonk
    There are any number of situations in the standard day where priority conflicts exist for projects. Management wants maximum productivity from employees. Marketing wants maximum salability and fast turnaround. Ownership wants maximum profit. Customers want usability and low cost. Regardless of the origin of the demands, time and money are always the limiting factors in business. Sometimes project elements have intrinsic or goodwill benefits for which there is no hard and fast way to measure in monetary terms (e.g. arguments for an attractive UI vs. a functional but plain one). Other elements of software may provide "mental breaks" or a motivating "cool factor" for developers that can get them back on track on other bigger, more complex issues. While these may sidetrack the project short term, they may produce greater results long term through improved job satisfaction, etc. Continued training is a must, but working it in can set back progress. What are your suggestions for setting priorities? How do you evaluate requests/demands on your projects? What are your suggestions for communicating and passing those on to your team in a way that keeps them focused?

    Read the article

  • [Cocoa] Can't find leak in my code.

    - by ryyst
    Hi, I've been spending the last few hours trying to find the memory leak in my code. Here it is:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        // expression is an NSString object.
        expression = [expression stringByTrimmingCharactersInSet:
                         [NSCharacterSet whitespaceAndNewlineCharacterSet]];

        NSArray *arguments = [NSArray arrayWithObjects:expression,
            [@"~/Desktop/file.txt" stringByExpandingTildeInPath],
            @"-n", @"--line-number", nil];

        NSPipe *outPipe = [[NSPipe alloc] init];
        NSTask *task = [[NSTask alloc] init];
        [task setLaunchPath:@"/usr/bin/grep"];
        [task setArguments:arguments];
        [task setStandardOutput:outPipe];
        [outPipe release];
        [task launch];

        NSData *data = [[outPipe fileHandleForReading] readDataToEndOfFile];
        [task waitUntilExit];
        [task release];

        NSString *string = [[NSString alloc] initWithBytes:[data bytes]
                                                    length:[data length]
                                                  encoding:NSUTF8StringEncoding];
        string = [string stringByReplacingOccurrencesOfString:@"\r" withString:@""];

        int linesNum = 0;
        NSMutableArray *possibleMatches = [[NSMutableArray alloc] init];
        if ([string length] > 0) {
            NSArray *lines = [string componentsSeparatedByString:@"\n"];
            linesNum = [lines count];
            for (int i = 0; i < [lines count]; i++) {
                NSString *currentLine = [lines objectAtIndex:i];
                NSArray *values = [currentLine componentsSeparatedByString:@"\t"];
                if ([values count] == 20)
                    [possibleMatches addObject:currentLine];
            }
        }
        [string release];
        [pool release];
        return [possibleMatches autorelease];

    I tried to follow the few basic rules of Cocoa memory management, but somehow there still seems to be a leak; I believe it's an array that's leaking. It's noticeable if possibleMatches is large. You can try the code by using any large file as "~/Desktop/file.txt" and, as the expression, something that yields many results when grep-ing. What's the mistake I'm making? Thanks for any help! -- Ry

    Read the article

  • What are the reasons to store documents into DBMS when using Alfresco CMS

    - by Julia
    Hello guys! I have an interview for an internship with a company that wants to implement a document management system. They are considering open source solutions in the first place, their top choice being Alfresco, but the decision is still not final; part of my work there would be to investigate whether Alfresco is the best solution. From what I have seen in the project description, they would implement Alfresco with a MySQL database, and not use the DBMS just for document metadata and indexing - they actually want to store the documents inside it. Given the company's profile, the documents would be mostly PDF and .doc, not images. I have researched a bit, and I have read all the topics here related to storing files in a database, so as not to duplicate a question. From what I understand, storing BLOBs is generally not recommended, and given the company's profile and their legal obligations around archiving, I see they will have to store a large amount of documents. I would like to be as prepared as I can for the interview, and that is why I would like your opinion on these questions: What would be your reasons for deciding to store documents in the DBMS (especially having in mind that you are installing Alfresco, which stores files in the FS)? Do you have any experience with storing documents in a MySQL database specifically? All help is very much appreciated; I am really excited about the interview and really want this internship, so this is one of the things I really want to understand beforehand! Thank you!

    Read the article

  • WPF Spellcheck Engine takes up too much memory.

    - by Matt H.
    Each DataTemplate in my WPF ItemsControl contains FIVE custom bindable RichTextBox controls. It is a data-driven app for authoring multiple-choice questions - the question and four answer choices must all support: 1) spell check, and 2) rich formatting (otherwise I'd use regular textboxes). The SpellCheck object in .NET 4 has a Friend constructor that takes a single argument of owner As TextBoxBase. This means every item in the ItemsControl carries 5 SpellCheck objects! This is the problem - every spell check engine consumes about 500 KB of memory. So after you factor in the spell check, bindings, additional controls in the DataTemplate, etc., a single multiple-choice question consumes more than 3 MB of memory. Users with 100-200 questions will quickly see the app raise its memory consumption to 500+ MB. Management is definitely not OK with this. Is there a way to minimize this problem? The best suggestion I've heard is to enable/disable spell check depending on whether the RichTextBox is in the ItemsControl's ScrollViewer; I haven't gotten an answer on how to go about it: (http://stackoverflow.com/questions/2869012/possible-to-implement-an-isviewportvisible-dependencyproperty-for-an-item-in-an-i) Any good ideas?
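
    A minimal sketch of that enable-on-demand idea, assuming you have some way to know when an editor scrolls in or out of view (the callback below is hypothetical): toggle the SpellCheck.IsEnabled property on TextBoxBase so that only visible editors hold a live engine.

        // Hypothetical visibility callback: only on-screen editors keep spell check on.
        private void OnEditorViewportChanged(RichTextBox editor, bool isInViewport)
        {
            // Turning the property off should allow the engine to be reclaimed;
            // turning it back on recreates it for the now-visible editor.
            editor.SpellCheck.IsEnabled = isInViewport;
        }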

    Read the article

  • Does Team Foundation support cross-app workitem groups?

    - by drachenstern
    We're currently using Visual SourceSafe and BugNet and looking to migrate up and away from VSS. I've been pushing for either SVN (a) we're an ASP.NET shop, b) DVCS is not an option - no matter how much I like Hg ;-) or TFS. Well, we finally got a new dev server, so I talked the boss into installing TFS on it (30-day trial). In the meantime, we had started experimenting with FogBugz. We really like FogBugz for about 80% of what we want to do, and the other 20% is probably stuff that we don't know we want. I'm pushing for TFS because it allows for IDE-integrated (mostly) everything. He's pushing for FogBugz because he can group tasks by customer and then project and manage everything from one dashboard (which means I lose most of my IDE integration - no huge loss, I agree). Does TFS support a single dashboard that would span all our solutions (in this case each solution is a full app that we sell to a vertical-market client) and let us assign work items to each solution-spanning group? So for instance, I think we envision something like this:
    PROJECT1 - bug tracker and work items
    PROJECT2 - bug tracker and work items
    PROJECT3 - bug tracker and work items
    CUSTOMER1 - deployment schedules, required features, specific notes (uses PROJECT1, PROJECT2)
    CUSTOMER2 - deployment schedules, required features, specific notes (uses PROJECT2, PROJECT3)
    CUSTOMER3 - deployment schedules, required features, specific notes (uses PROJECT1, PROJECT3)
    Hopefully that makes sense; naturally it's more complicated than this, but I think I've given enough detail to paint a picture. I offered the option of creating dummy projects per customer, but he doesn't like that and it doesn't really give us the single-dashboard view that we're hoping to end up with (and that FogBugz, as we've sort of implemented things, does give us now). Has anyone got a good suggestion for a management app that would accomplish what both of us want?

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve a potentially extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work, and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be highly visible. I'm probably on the higher end of intermediate at Objective-C... so I am looking for some solid articles and advice on the following:
    1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
    2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
    3) Anything dealing with memory paging, particularly as it pertains to images
    I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.
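
    One concrete lever relevant to point 1, a minimal sketch (my illustration; file names are hypothetical): [UIImage imageNamed:] caches decoded images for the life of the app, while [UIImage imageWithContentsOfFile:] does not, which matters when hundreds of images pass through memory.

        // Cached by the system; suited to small, frequently reused images.
        UIImage *icon = [UIImage imageNamed:@"icon.png"];

        // Not cached; the memory can be reclaimed once the image is released.
        NSString *path = [[NSBundle mainBundle] pathForResource:@"big_photo" ofType:@"jpg"];
        UIImage *photo = [UIImage imageWithContentsOfFile:path];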

    Read the article

  • WordPerfect programmers refusing to use anything but assembler

    - by Totophil
    There is a version of events (popularised by Joel Spolsky) attributing the demise of WordPerfect to a refusal of its programmers to use anything but assembler, which led to the delay of the first WPwin release and, as a result, eventually to losing the all-important battle with Microsoft. There are a few references to programming work being done in assembler in the autobiographical book "Almost Perfect" by W. E. Pete Peterson, who used to have a major influence on running the corporation. But those references go back to the early '80s, when WordPerfect was trying to gain significant market share by defeating WordStar, not the early '90s, when the battle with MS took place. I am looking for a second, independent source to confirm the assumption. Maybe someone who worked for WordPerfect Corporation at the time, who was close to the company, or who had a chance to see the source code could clarify the issue. Your help is much appreciated, thanks! Please note that this question is not about any other theories or reasons behind WordPerfect's demise. I really just need to clarify whether they used assembler as a primary language for WPwin and (as a bonus, really) whether there were discussions held within the corporation about assembler being the right choice. Concisely:
    Did WPCorp use assembler as a primary language for WPwin?
    Were discussions held at the time amongst WPCorp staff about assembler being the right choice (was it a management or a programmers' decision)?

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone into SQL Server Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to save, Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when it attempts to insert a null value?
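
    A minimal sketch of one way to avoid hand-writing the N statements, assuming every nullable bit column in the table should be backfilled (a generated-SQL approach; review the output before running it against each database):

        -- Generate one UPDATE per nullable bit column in MyTable
        SELECT 'UPDATE MyTable SET ' + c.name + ' = 0 WHERE ' + c.name + ' IS NULL;'
        FROM sys.columns AS c
        JOIN sys.types AS t ON t.user_type_id = c.user_type_id
        WHERE c.object_id = OBJECT_ID('MyTable')
          AND t.name = 'bit'
          AND c.is_nullable = 1;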

    Read the article

  • Memory usage in Flash / Flex / AS3

    - by ggambett
    I'm having some trouble with memory management in a Flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets. I embed several raster images in a class Embedded, like this:

        [Embed(source="/home/gabriel/text_hard.jpg")]
        public static var ASSET_text_hard_DOT_jpg : Class;

    I then instance the assets this way:

        var pClass : Class = Embedded[sResource] as Class;
        return new pClass() as Bitmap;

    At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory. Based on this behavior, it looks like the Flash player is creating an instance of the class the first time I request it, but never ever releases it - not with all references nulled, not after calling System.gc(), not with the double LocalConnection trick, and not even after calling dispose() on the BitmapData objects. Of course, this is very undesirable - memory usage would grow until everything in the SWFs is instanced, regardless of whether I stopped using some asset long ago. Is my analysis correct? Can anything be done to fix this?

    Read the article

  • Free Memory Occupied by Std List, Vector, Map etc

    - by Graviton
    Coming from a C# background, I have only the vaguest idea of memory management in C++ - all I know is that I would have to free the memory manually. As a result, my C++ code is written in such a way that objects of type std::vector, std::list and std::map are freely instantiated and used, but not freed. I didn't realize this point until I was almost done with my program; now my code consists of the following kinds of patterns:

        struct Point_2 {
            double x;
            double y;
        };

        struct Point_3 {
            double x;
            double y;
            double z;
        };

        list<list<Point_2>> Computation::ComputationJob(list<Point_3> pts3D, vector<Point_2> vectors)
        {
            map<Point_2, double> pt2DMap = ConstructPointMap(pts3D);
            vector<Point_2> vectorList = ConstructVectors(vectors);
            list<list<Point_2>> faceList2D = ConstructPoints(vectorList, pt2DMap);
            return faceList2D;
        }

    My question is, must I free every.single.one of these containers (in the above example, that would mean freeing pt2DMap, vectorList and faceList2D)? That would be very tedious! I might just as well rewrite my Computation class so that it is less prone to memory leaks. Any idea how to fix this?
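
    For context, a minimal sketch of the RAII behavior in question (my illustration, not the poster's code): standard containers own their elements and release them in their destructors, so locals like the ones above never need an explicit free; only memory obtained with new/delete needs manual management.

        #include <list>
        #include <vector>

        void example()
        {
            std::vector<int> v(1000);            // allocates storage for 1000 ints
            std::list<std::vector<int>> nested;  // containers of containers work the same way
            nested.push_back(v);                 // copies; the list owns its copy
        }   // v and nested both free all their memory here, automatically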

    Read the article

  • Web CMS That Outputs to Flat Static Pages (.html) via FTP to Remote Server?

    - by Sootah
    I have a web app project that I will be starting to work on shortly. One of the features included is going to be a content management system where users can add content, and that content will then be combined with a template and output as a regular .html file. This .html file would then be FTPed to their own web host. As I've always believed in not reinventing the wheel, I figured I'd see if there are any quality customizable CMSes out there that already do this. For instance, Blogger.com allows you to post all of your content to your account there, but offers the option to let you use your own hosting. Any time you publish a new article, a new .html page is generated (as well as an updated index page with links to the new article) and then the updated content is FTPed to your own server. What I would like is something like this that I can modify to more closely suit my needs. Required features:
    - Able to host on my own server
    - Written in PHP
    - Users add content through their account; when posted, it is FTPed as .html to their server
    - Any appropriate pages are also updated to link to the new content (like the index page or whatnot)
    - Templateable
    - Customizable
    Optional (but very much desired) feature:
    - Written in CodeIgniter or a similar PHP framework
    While CodeIgniter isn't strictly required, I would very much prefer it. It speeds up development time and makes things much easier to implement. So - any suggestions? I've stumbled across a few CMSes that push to remote servers as static pages, but the ones I've found are all hosted on the developers' servers, which means that I cannot modify them at all. Thanks again, fellow StackOverflowians! -Sootah

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello all, we are a new Agile shop and we are encountering an issue that I hope others have seen. In our process, the trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches off the trunk for new development. All work and testing occurs in these branches. An individual branch pulls up from the trunk as needed to stay integrated as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile. My issue comes in here: I require that the branches pull up from the trunk at the end of their life cycle and complete the validation, regression testing and configuration work before pushing to the trunk. Once reintegrated into the trunk, I ask for at least a build and an automated smoke test. However, I am now getting pushback on the trunk validation. The argument is that the developers can merge the code without the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of build and regression testing, to have the developer diff the feature branch and the newly merged trunk. That process, in their minds, would replace the regression testing I asked for. So what do you require when you reintegrate back to the trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches? Thanks for any input. LoneCM

    Read the article
