Search Results

Search found 589 results on 24 pages for 'pascal ghost'.

Page 8/24 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided; in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine-grained system for creating element names out of the available meta-data, which I'll describe in more detail below. First the model element related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.

    Automatic model element naming features
    When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes effect in model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings: open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.

    Strippers!
    The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.

    Underscores Be Gone
    Another thing you might want to get rid of are underscores. After all, most naming schemes for entities and their classes use PascalCase rules and don't allow underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when told to do so.

    PascalCase everywhere... or not, your call
    LLBLGen Pro can automatically PascalCase names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break, and a case difference marks a word break. It will remove spaces in all cases, keep or remove the underscores based on the underscore removal setting, upper-case the first character of each word-break fragment, and lower-case the rest.
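    To make the construction steps concrete, here is a minimal Python sketch of this kind of pipeline: strip configured prefix fragments, split on word breaks, PascalCase the fragments, and naively singularize the result. This is purely illustrative and not LLBLGen Pro's actual implementation; the strip pattern, the word-break regex and the singularization rule are assumptions made for the example.

        import re

        def construct_name(raw, strip_prefixes=("TBL_",)):
            """Illustrative sketch of automatic element name construction:
            strip prefix fragments, split on word breaks, PascalCase each fragment."""
            name = raw
            # Strip configured prefix fragments (e.g. 'TBL_' from 'TBL_ORDER_LINES').
            for prefix in strip_prefixes:
                if name.upper().startswith(prefix.upper()):
                    name = name[len(prefix):]
                    break
            # Word breaks: spaces, underscores and case differences.
            fragments = []
            for part in re.split(r"[ _]+", name):
                fragments.extend(re.findall(r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+", part))
            # Upper-case the first character of each fragment, lower-case the rest.
            return "".join(f[0].upper() + f[1:].lower() for f in fragments)

        def singularize(name):
            # Extremely naive singularization, good enough for this example only;
            # a real implementation needs proper English inflection rules.
            return name[:-1] if name.endswith("s") and not name.endswith("ss") else name

        print(construct_name("TBL_ORDER_LINES"))               # OrderLines
        print(singularize(construct_name("TBL_ORDER_LINES")))  # OrderLine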
    Say we keep the defaults, which are to remove underscores, always PascalCase, and strip the TBL_ fragment. With our example TBL_ORDER_LINES, stripping TBL_ from the table name leaves two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased and the rest lower-cased, so this results in OrderLines. Almost there!

    Pluralization and Singularization
    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This converts OrderLines, the result we got from the PascalCase functionality, into OrderLine, exactly what we're after.

    Show me the patterns!
    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto it in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define what the auto-created navigator names will look like. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed.

    When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate foreign key fields in your classes: with NHibernate and EF these can be hidden in the generated code if you want.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want a different name than Id for the foreign key field. In our Customer - Order example, you might want CustomerId as the foreign key name in Order instead. The pattern for foreign key fields gives you that freedom.

    Abbreviations... make sense of OrdNr and friends
    I already described word breaks in the PascalCase paragraph and how they're used in the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: typing is something he avoids like the plague. This has resulted in tables and fields with names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES table has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you would also like is to not have to rename that field manually; there are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is to define some abbreviation/full-word pairs, and while reverse engineering model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two pairs: ORD as abbreviation with Order as full word, and NR as abbreviation with Number as full word. LLBLGen Pro will then convert every word fragment found with the word breaks which matches an abbreviation into the given full word. Abbreviations are case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.
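    As a rough illustration of that abbreviation step (again a sketch under assumed inputs, not LLBLGen Pro's code), expanding word-break fragments through a lookup table could look like this; the expansion runs before the PascalCase step, so the full words end up properly cased in the final name:

        # Hypothetical abbreviation/full-word pairs, as they would be entered under
        # Conventions -> Element Name Construction -> Abbreviations (case sensitive).
        ABBREVIATIONS = {"ORD": "Order", "NR": "Number"}

        def expand_fragments(fragments):
            # Replace each word-break fragment that matches a configured
            # abbreviation with its full word; leave the rest untouched.
            return [ABBREVIATIONS.get(f, f) for f in fragments]

        # 'ORD_NR' splits on the underscore into ORD and NR,
        # which expand to Order and Number -> 'OrderNumber'.
        print("".join(expand_fragments("ORD_NR".split("_"))))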
    Automatic relational model element naming features
    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations it's key that you can control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.

    Underscores, welcome back!
    Not every database is case insensitive, and not every organization requires PascalCase table/field names; some demand all lower-case or all upper-case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower-case characters and you need all caps. No worries, see below.

    Casing directives so everyone can sleep well at night
    For case sensitive and case insensitive databases there is one setting each which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example we need all upper-case characters, so we select AllUpperCase for the setting for case sensitive databases. This produces the name ORDER_LINE.

    Sequence naming after a pattern
    Some databases support sequences, and using model-first development it's key to have sequences, when needed, created automatically, if possible with a name which shows where they're used. Say you have an entity Order and you want the PK values to be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle), and as you want all numeric PK fields to be sequenced, you have enabled this through the setting Auto assign sequences to integer pks. When you use LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I discussed above, with a PK field ID, and it will also create a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns discussed previously.

    Grouping and schemas
    When you start from scratch and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are placed in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM and Customer in CRM. When auto-mapping this model to create tables, you might want the table created for Employee to be in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you set the setting Set schema name after group name to true (the default). This gives you total control over what is placed where in the database from your model.

    But I want plural table names... and TBL_ prefixes!
    For now we follow best practices, which suggest singular table names and no prefixes/suffixes in names. Of course that won't keep everyone happy, so we're looking into making this possible in a future version.

    Conclusion
    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!

    Read the article

  • Online Code Editor [closed]

    - by Velvet Ghost
    The major online IDEs are hosted on the service provider's server. Examples are Kodingen, Cloud9, ShiftEdit. Hence they would be unavailable if the external server was down for some reason, and I prefer to do my computing on my own computer anyway. Does anyone know of an online IDE or editor (preferably just an editor - a simple implementation of the Ace or CodeMirror JS editors) which can be downloaded and run on localhost (on a local LAMP server)? I've found two so far - Eclipse Orion and Wiode, but I don't like either of them very much, and I'm looking for alternatives. Also suitable are browser extensions which run natively on the browser (offline) without going to some external site. An example would be SourceKit for Chrom(e/ium).

    Read the article

  • How can I rescue a Lubuntu install?

    - by Ghost
    Quick recap: I was having a problem with hibernation, so I checked and found the linuxswap partition missing, with an "unknown" chunk of drive where it used to be. This has happened before: I booted to the live CD and used GParted to reformat that partition back to swap. Then I boot........ F---- grub rescue... Restoring the Windows MBR took care of that problem, except that now I'm back to Windows only. EVERY guide out there makes me reinstall Lubuntu from scratch, a waste of time considering it will take me at least a day to reinstall everything. Can't I just fix grub like I did with the Windows MBR?

    Read the article

  • Installing 12.10 messed up Xorg

    - by Ghost
    So I finally decided to upgrade from 12.04, and now my Ubuntu install has been rendered unusable. I'm getting the "no screens detected" error from Xorg in the console, and no matter how many times I reinstall it, Xorg keeps throwing the same errors. I already had to do a bunch of hacks like nomodeset just to get to the console, else I would get an infinite tabulator after the purple boot screen. The machine has a 4200HD IGP. I even tried installing fglrx legacy (4xxx series and below are not supported in regular fglrx anymore), which was a subpar driver in 12.04, in hopes that the machine would at least work, but nope, nothing at all. Anyone had the same problem? How do I fix it? EDIT: just wanted to add that I upgraded to 13.04 and it didn't solve anything; in fact it might have broken things even further.

    Read the article

  • Really weird situation with Swap that refuses to work

    - by Ghost
    So I decided to move my swap partition to another HDD to make hibernation faster. I created a partition with a live USB, ran swapon, and then hibernate just disappeared. I went into the terminal and reactivated hibernate (so the button shows); for some reason the swap wasn't "on", so I ran swapon again. Still not working. I went back to the terminal and edited the file with the UUID of the new swap partition. The partition shows as swap in GParted, it's on, the file has the right UUID, even the task manager shows a swap area of 10 GB, and there is a hibernate button on the shutdown window. But it-wont-hibernate! What's going on?

    Read the article

  • How can I format my active hard drive to NTFS?

    - by Ghost
    Believe it or not, I'm not too happy with Ubuntu. Well, let me rephrase that: I like it, but the one thing I don't like is that it's too much of a hassle to get a game to work. I'm trying to install Windows 7 from a 4GB flash drive, but the error that comes up is that the hard drive I'm trying to install on is in ext4; I need to format it to NTFS. I can't seem to find any topics on how to format an active hard drive. I found a topic that explains how to move Ubuntu to a new drive, but it's a bit confusing to me. Please help! (Please don't disregard this topic just because I want to go back to Windows.)

    Read the article

  • Massive crash and kernel panic after updates yesterday, what now?

    - by Ghost
    Got 12.04 on an AMD+Radeon machine. Everything was running fine until last night, when the update manager installed new packages; now, the moment I open a browser (Firefox, Chromium, anything), it crashes instantly. It says:

    warning: at build/buildd/linux-3.5.0/arch/x86/kernel/smp.c:123 native_smp_send_reschedule+0x5b/0x60()
    Pid: 2432, comm: chrome_dbthread tainted: G D 3.5.0-2generic #2-ubuntu

    I already tried uninstalling the packages from the update log; it didn't solve a thing. Depending on which browser I use I get a different reaction, like a complete freeze instead of a text console crash. Ideas? I really don't know what to do.

    Read the article

  • Which is the best low/mid level GPU in terms of compatibility for 12.04 and 12.10?

    - by Ghost
    I'm tired of dealing with the disaster that is Catalyst on a 4xxx IGP, and I want to buy some discrete graphics that support both Unity and alternative environments like GNOME 3 and Mate/Cinnamon without getting sluggish. Besides that, 1080p video and some light 3D gaming; again, nothing crazy. I'm not a designer and I won't be using stuff like Blender. One thing I want to make clear is that I want a GPU that is 100% compatible: that means no bugs, no glitches, no having to toggle between the proprietary and the Xorg drivers because it decided to crap out on me. Thanks for your help.

    Read the article

  • Adding Apache Commons dependency to Play Framework 2.0

    - by Mooh
    I want to import org.apache.commons.io, but I'm getting this error:

    [info] Compiling 1 Java source to /home/ghost/Bureau/app/play-2.0.1/waf/target/scala-2.9.1/classes...
    [error] /home/ghost/Bureau/app/play-2.0.1/waf/app/controllers/Application.java:9: error: package org.apache.commons.io does not exist
    [error] import org.apache.commons.io.*;
    [error] ^
    [error] /home/ghost/Bureau/app/play-2.0.1/waf/app/controllers/Application.java:41: error: cannot find symbol
    [error] FileUtils.copyFile(file, destinationFile);
    [error] ^
    [error] symbol: variable FileUtils
    [error] location: class Application
    [error] 2 errors
    [error] {file:/home/ghost/Bureau/app/play-2.0.1/waf/}waf/compile:compile: javac returned nonzero exit code
    [error] application - Play can't find package org.apache.commons.io

    How can I add it as a dependency?

    Read the article

  • Friday Fun: Ghosruns

    - by Asian Angel
    In this week’s game a huge ghost is on the loose and chasing after your group of humans. Can you successfully beat the Match-3 challenge on each level, or will your group become this ghost’s newest friends for eternity?

    Read the article

  • Ways to serve AWS from another domain

    - by mplungjan
    I have installed Ghost on AWS (it is running Node). I very much dislike the URL they gave me: http://ec2-nn-nnn-nnn-nnn.us-west-2.compute.amazonaws.com/ghost/. I own a domain and Linux hosting (but not a VPS). What would be a practical way to serve my blog via URLs on my own (sub)domain? I can use PHP and access .htaccess on my domain, and possibly do things on the AWS instance too (let me know what to look for).

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the Testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which have been found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies to use the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    - Publishing of test results from builds
    - Publishing of manual test results and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results will include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries, which happens for automated test runs, including tests run during a build using code coverage (which will include all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue.
    2. Set up your TFS server to minimize potential database issues.
    3. If you have the "problem", clean up the database and otherwise keep it clean.

    Analyze the data
    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within the Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them.

    If you have multiple Project Collections, find out which might have problems. (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type = 0
          and substring(db_name(database_id), 1, 4) = 'Tfs_'
          and DB_NAME(database_id) <> 'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gives the following results: it is pretty easy to see on which collection to start the work.

    Find out which tables are possibly too large
    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the Version Control category, so the only big issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments
    In order to use the TAC to find, and eventually delete, attachment data we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
          inner join tbl_testrun as tr on a.testrunid = tr.testrunid
          inner join tbl_project as p on p.projectid = tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years. As can be seen here, it is pretty obvious that the "Byggtjeneste – Projects" are the main team project to take care of, with the ones on lines 2-4 as the next ones.

    Check which attachment types take up the most space
    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
          inner join tbl_testrun as tr on a.testrunid = tr.testrunid
          inner join tbl_project as p on p.projectid = tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    We then got this result: from this it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space
    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename, len(filename)-CHARINDEX('.', REVERSE(filename))+2, 999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename, len(filename)-CHARINDEX('.', REVERSE(filename))+2, 999)
        order by sum(compressedlength) desc

    This gives a result like this: now you should have collected enough information to tell you what to do - if you have to do something at all - and some of the information you need in order to set up your TAC settings file, both for a one-off cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up
    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS Server is at SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use; download from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
    Binaries will still be uploaded if:

    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

    - The build servers (the build agents)
    - The machine hosting the Test Controller
    - Local development computers (Visual Studio)
    - Local test computers (MTM)

    It is not required to install it on the TFS Server, the test agents or the build controller – it has no effect on these programs.

    If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem with hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can be – with some mental effort – deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. The fix is to stop all components that depend on the SQL server, like the TFS Server and SPS services – that is, all applications that connect to the SQL server – then restart the SQL server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghosted-record hang problem occurring, and further makes it easier for the SQL server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a 3-step process of removing SQL server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.

    Cleaning out the attachments
    The TAC is run from the command line using a set of parameters and is controlled by a settings file. The parameters point out a server URI including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project, as sketched below.
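    Since the TAC handles one team project per invocation, a scheduled wrapper script is the natural approach. Here is a minimal Python sketch that loops over a list of team projects and shells out to the tcmpt command shown later in this guide; the collection URL, team project names and file paths are placeholders you would replace with your own:

        import subprocess

        # Placeholders - replace with your own collection URL, projects and paths.
        COLLECTION = "http://tfsserver:8080/tfs/DefaultCollection"
        TEAM_PROJECTS = ["TeamProjectA", "TeamProjectB"]
        SETTINGS_FILE = r"C:\TAC\nightly-cleanup.xml"

        for project in TEAM_PROJECTS:
            # One TAC invocation per team project, logging to a per-project file.
            cmd = [
                "tcmpt", "attachmentcleanup",
                f"/collection:{COLLECTION}",
                f"/teamproject:{project}",
                f"/settingsfile:{SETTINGS_FILE}",
                f"/outputfile:C:\\TAC\\logs\\{project}.tcmpt.log",
                "/mode:delete",
            ]
            print("Running:", " ".join(cmd))
            subprocess.run(cmd, check=True)

    Schedule this nightly (e.g. via the Windows task scheduler) so each run deletes only a small batch of data, in line with Ravi Shanker's guideline above.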
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique: it works, but you need to take care when cleaning up.

    Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out old items, both in a one-off clean-up operation and in a general scheduled maintenance run.

    There are two scenarios we need to consider:

    1. Cleaning up an existing overgrown database
    2. Maintaining a server to avoid an overgrown database, using a scheduled TAC

    1. Cleaning up a database which has grown too big due to these attachments
    This job is a "once" job: we do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or, if you want to remove only dll's and pdb's above that size, add an Extensions section; without that section, attachments with any extension will be deleted:

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC
    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments with active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above without the LinkedBugs section as a sweeper task, taking away all data older than you could care about, and then have another scheduled TAC task to take out more specifically the attachments that you are not likely to use.
    This second task can be much more specific, and based on your analysis clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace" />
              <Include value="dll" />
              <Include value="pdb" />
              <Include value="wmv" />
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes a set of "internal" extensions, but it does in fact accept any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database
    You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the data will be reused by the SQL server when it needs to add more records. In fact, it is regarded as a bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with a DBCC INDEXDEFRAG afterwards. I find the easiest way to do this is to create a SQL Maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.

    Read the article

  • Windows Boot disc to save files?

    - by acidzombie24
    Somehow, after updating my legit Windows 7 OS with no pirated or modded software on my PC, I was welcomed by a black screen. I popped in my Ghost disc and copied the files I need to an external HD. IIRC the Windows 7 disc can do that too. The problem with the way I did it with Ghost was that it expected me to select one file (an HD disc image), so I couldn't select multiple folders to move. Also, when I did move them, I had no idea if it had finished or how long it would take. My Linux live CD couldn't access the HD. Anyway, is there a disc I can use to easily copy files from my laptop to my external HD? I think Ghost, Windows 7 and Windows Server all allow me to, but is there one that is better suited to copying files?

    Read the article

  • Installing gnome shell extensions from extensions.gnome.org fails silently

    - by Pascal
    On a fresh Ubuntu install (12.04, 64-bit), after installing gnome-shell, I've tried to install some extensions from extensions.gnome.org but got no result. I've tried with Firefox and Chromium and got the same issue. Open any extension page on extensions.gnome.org. Switch extension to "ON". Agree with confirmation about installation. Nothing happens and nothing has been installed (.local/share/gnome-shell/extensions is empty). I've checked .xsession-errors, Firefox's javascript console, gnome-shell console errors (Alt-F2 + looking glass). There isn't any trace of any error.

    Read the article

  • expand window to free space on screen in kde

    - by Pascal Rosin
    I am using Kubuntu and I want to expand the current window into the free space on the screen, or to say it more precisely: I want to make the current window as big as possible without overlapping other windows (windows already overlapped should be ignored). Is there a keyboard shortcut, or an extension to the KDE window management that provides such a shortcut or a window button? I would also appreciate a hint on how to write a script that could do this window thing on keyboard shortcut invocation. I am a programmer, but I don't know the best way to control KDE windows via a script.

    Read the article

  • How can a .NET programmer learn Big Data/Hadoop? [on hold]

    - by Smith Pascal Jr.
    I have been an ASP.NET developer for some time now, and I have been reading a lot about Big Data/Hadoop and its future: how it is the next technology in IT and how it will be useful in creating millions of jobs in the US and elsewhere in the world. Now, since Hadoop is an open source big data tool which is managed by the Apache Software Foundation, I'm assuming I have to be well aware of Java - correct me if I'm wrong. Moreover, how can a .NET programmer learn Big Data and its related technologies and work professionally full time with this technology? What challenges and opportunities does a .NET professional face when changing technology platforms? Please advise. Thanks

    Read the article

  • experience: coding on netbooks

    - by pascal
    Hi, I want to buy a netbook for doing some stuff on the train. Can someone report what it's like to code simple stuff on a 10/12" netbook? I wanted to buy a very cheap one, like 1 GB RAM, 1.6 GHz, and run Linux on it with Apache. I will code with JS/PHP, and as my IDE I'll be using Notepad++, so nothing big like Eclipse. Maybe later on Eclipse for Java, but that doesn't really matter. So first, would this setup work fine on such a netbook, and is it okay for coding? I don't style any homepages on the netbook, I just want to code. Would be nice if someone can share their experience with that. Thanks :)

    Read the article

  • Directing Multiple ccTLD's to 1 gTLD with a country specific subdirectory?

    - by Pascal Van Opzeeland
    We have multiple ccTLD domains and are thinking about how best to combine these into one. We want to do this to focus our link building efforts. We are running a website through which we offer a software-as-a-service, so we could potentially sell to any country in the world; however, Germany is our most important market. We currently have a .com, .de, .nl and .pl domain. All these domains have a high number of unique content pages. What we are planning is to move everything to .com with language-based subdirectories, so .com/en/, .com/de/, etc. I have two questions concerning this issue: 1) How much of an advantage does a ccTLD have over a gTLD with country specific subdirectories in search rankings? So let's say .de versus .com/de/? 2) How can we best redirect the visitors of our old ccTLDs to our gTLD's subdirectories? We would like to lose as few search engine rankings as possible. Thank you for your help.

    Read the article

  • VMWare Player pauses often

    - by pascal
    I'm using a 64bit Windows 8 inside vmplayer, with 2 virtual processor cores, virtual hard disk resides on a fast local disc and is not preallocated; host CPU is Intel i7 3770, should be capable of hardware virtualisation but I don't know if VMWare uses it; NAT networking; Sound card connected, USB connected, accelerated 3D graphics (NVidia 313.30 on host) My problem is, that the VM often pauses for a few seconds, and then speeds up for a few seconds to reach real time again. Time in the VM actually moves faster after the pause, for example all animations using timers speed up. When running, the vmware-vmx process shows ~150% CPU usage in top, but 0% when pausing (and D state i.e. waiting for IO). iotop shows normal disk writes from vmware-vmx threads, but during pauses, the flush kernel thread uses 99%. Are there some options to try so that VMWare doesn't wait for IO? I've tried a few things available from the GUI but the issue never went away…

    Read the article

  • Didn't you have problems with the upgrade from 11.10 to 12.04 (LibreOffice)?

    - by Pascal Paulus
    This is the first time I'm reporting something, hoping that it can be useful for you. When updating from 11.10 to 12.04 (which includes updating LibreOffice, I suppose), I can no longer work with any document that was originally made in LibreOffice. Every change freezes the screen; I can't save anything. I'm talking about complex documents with lots of internal references and footnotes and some proper text styles, of about 230 pages (PhD work). I wanted to alert you that probably something is wrong, but as I don't have any technical knowledge, I don't know what could be useful to help you in your great job of making good free software. My little desktop has 2 GB of RAM and an Atom processor (I can look for more details if that would be useful to you).

    Read the article

  • Classical music (Musique classique)

    - by Pascal
    (Translated from French.) Why isn't there a French version of this application? Why can't I access genres such as Classical Music, Opera, Chamber Music, etc.? Why isn't there a "Composer" field?

    Read the article

  • Software Center won't take into account my changed OpenID name: any idea?

    - by Pascal Av
    Failing to install IntelliJ from the Software Center, I realized my login is wrong in the /etc/apt/auth.conf entry that the install process generates. In this file I see my original OpenID, the one which got automatically generated when signing up on Launchpad. It contains my last name, so I changed it. I purged config and binaries for Ubuntu One, reinstalled, deleted all listed "Devices" from the app and all "Applications" from Launchpad. I deleted ~/.cache/software-center/ and rebooted, but still: when installing IntelliJ, the auth.conf file receives my original OpenID, not the modified one. The problem is that the commercial subscription for the IntelliJ private PPA uses my modified OpenID, so the authentication attempt fails. I can't remove or modify this subscription, even by changing my OpenID back in Launchpad. Any idea how to solve this?

    Read the article

  • Unique SMS sender id?

    - by Pascal
    Hello, I want to build an app that sends SMS to people. However, I want my users to know that the SMS comes from the app and nothing else, so they can't fake it. Is there a way to guarantee that the sender ID is unique to my app? It seems that an SMS sent from a phone carries a sender ID unique to that phone number, but from what I read, I don't think that is the case when sending an SMS through a web gateway. Is this correct? I am not an expert in mobile phone security. Of course, I am willing to pay the price for a unique sender ID, if such a thing is possible. Regards, Pascal

    Read the article

  • how to configure svn between two remote systems on different networks?

    - by Ghost Answer
    The story is that I have two systems with IPs on two different networks (like a client on 10.0.15.24 and a server on 117.152.18.140). I am communicating between them with TeamViewer over port 80. I have installed the svn Subversion server on the server system and an svn client on the client system. I also installed the Apache server on the server system. Now my question is that I want to configure them. First, is it possible? If yes, please give a proper suggestion. Are there any changes to be made in the firewall on the server, the client, or both? Thanks

    Read the article
