Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 53/189 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Transparent Data Encryption (TDE) in SQL Server

There are several ways to implement encryption in SQL Server; Arshad Ali focuses on Transparent Data Encryption (TDE), which was introduced in SQL Server 2008 and is available in later releases.
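    For orientation, enabling TDE always follows the same sequence. A minimal T-SQL sketch (the database and certificate names are placeholders, not from the article):

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

        USE MyDatabase;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TDECert;
        ALTER DATABASE MyDatabase SET ENCRYPTION ON;
        -- back up the certificate and its private key; without them the
        -- encrypted database cannot be restored on another server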

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used, and the challenges faced, across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey:
    - The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB, and some up to 5 PB or more).
    - About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    - Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%.
    - Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.
    A few key backup and recovery challenges resonated across many of the respondents:
    - Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.
    - Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real-time.
    - Management complexity (see Figure 1): 25% stated that recovery operations are too complex, and 31% reported that backups need constant management. 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.
    [Figure 1: Current Challenges with Database Backup and Recovery]
    [Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time]
    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

  • SLOB: The Silly Little Oracle Benchmark

    - by katsumii
    From "Introducing SLOB – The Silly Little Oracle Benchmark" on Kevin Closson's Blog: Platforms, Databases and Storage:
    - SLOB supports testing Oracle logical read (SGA buffer gets) scaling
    - SLOB supports testing physical random single-block reads (db file sequential read)
    - SLOB supports testing random single block writes (DBWR flushing capacity)
    - SLOB supports testing extreme REDO logging I/O
    Unlike Swingbench, SLOB is built on IPC semaphores and C code, so running it on Windows would require Cygwin. From the Swingbench site: "Swingbench can be used to demonstrate and test technologies such as Real Application Clusters, Online table rebuilds, Standby databases, Online backup and recovery etc." For I/O calibration there is also ORION; from the Oracle ORION Downloads page: "ORION (Oracle I/O Calibration Tool) is a standalone tool for calibrating the I/O performance for storage systems."

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance; I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The setup is:
    - Lubuntu Linux
    - Two PostgreSQL databases, test_master and test_slave
    - Each DB has a table named test, containing 2 columns: id (PK) and test (int)
    - I use pgAdmin3
    I have already added them to Bucardo's list of databases and added all tables:

        Table: public.test  DB: test_slave   PK: id (int4)
        Table: public.test  DB: test_master  PK: id (int4)

    As you can see, because the DBs are identical, even the schema names are identical. So when I do:

        bucardo_ctl add herd sample_herd public.test

    it gets added to the herd, but this command gets confused about which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
        CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as the source, how do I differentiate?

  • When running PowerShell script as a scheduled task some Exchange 2010 database properties are null

    - by barophobia
    Hello, I've written a script that intends to retrieve the DatabaseSize of a database from Exchange 2010. I created a new AD user for this script to run under as a scheduled task. I gave this user admin rights to the Exchange Organization (as a last resort during my testing) and local admin rights on the Exchange machine. When I run this script manually by starting PowerShell (with runas /noprofile /user:domain\user powershell) everything works fine; all the database properties are available. When I run the script as a scheduled task, a lot of the properties are null, including the one I want: DatabaseSize. I've also tried running the script as the domain admin account, with the same results. There must be something different in the two contexts, but I can't figure out what it is. My script:

        Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
        Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Starting script"
        $databases = get-mailboxdatabase -status
        if ($databases -ne $null) {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Object created"
            $databasesize_text = $databases.databasesize.tomb().tostring()
            if ($databasesize_text -ne $null) {
                $output = "echo " + $databasesize_text + ":ok"
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path check"
                if (test-path "\\mon-01\prtgsensors\EXE\") {
                    Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Path valid"
                    Set-Content \\mon-01\prtgsensors\EXE\ex-05_db_size.bat -value $output
                }
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "Exiting program"
            }
            else {
                Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "databasesize_text is empty. nothing to do"
            }
        }
        else {
            Write-EventLog 'Windows PowerShell' -source PowerShell -eventid 100 -message "object not created. nothing to do"
        }
        exit 0

  • SQL 2008 disk layout on a budget, for database mirroring

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will use a combination of local storage and SAN storage. (I have to work with what I have; the organization's SAN storage is currently all allocated, and it was like pulling teeth to even get what I have now.) My disk setup on the principal is as follows:
    - RAID 1 for the OS
    - RAID 10 for logs
    - RAID 10 fiber on the SAN for high-I/O databases
    - RAID 10 SATA on the SAN for content databases
    My question regarding the principal server is: where should I place tempdb? I thought about placing it on the fiber RAID 10, which will be hosting my high-I/O SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN; it is all local storage, 6 15k SAS drives. My question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything else? Any help you can provide is much appreciated.
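    For reference, relocating tempdb is a metadata-only change in T-SQL; a minimal sketch (the target path is a placeholder, and the move takes effect only after the instance restarts):

        USE master;
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf');
        -- tempdev and templog are the default logical names; confirm with:
        -- SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb');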

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me NUTS!! We take backups of all of our production databases to a network share, which is then backed up to tape nightly:
    - 8pm Mon-Fri: full backup, followed by a log backup
    - 7am-7pm Mon-Fri, at half-hour intervals: log backup
    Our backups have been working in this manner since we migrated from SQL Server 2000 Standard to 2008, 3 years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week, we've had no problems. I guess the issue may have something to do with the size of the log backup that's attempted after a weekend of no backups. Now onto the issue I need a fix for: all this week, every full backup of our two biggest databases has failed (both backups < 1GB compressed). There's plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required. Changing the backup destination to local storage works fine (and is very, very fast in comparison). From the Job History, I can find a few hints as to what the problem could be:

        Code: 0xC002F210 (always this code, but a mix of the following descriptions)
        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."
        "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally."

    Please help save my hair and sanity!!
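    For context, the shape of statement these jobs run; a sketch using the share name from the error text and a placeholder database name:

        BACKUP DATABASE [MyDatabase]
        TO DISK = N'\\drserver\SQLBackups\Database.bak'
        WITH COMPRESSION, CHECKSUM, INIT;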

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide, where we have almost 200 databases residing. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrunk the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx:

        "Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database."

    Were there any other options I'm not aware of?
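    For reference, the sequence described above as it would look in T-SQL on SQL Server 2005 (names and sizes are placeholders; per the quoted warning, TRUNCATE_ONLY discards log records, breaks the log backup chain, and was removed in SQL Server 2008):

        BACKUP DATABASE [MyDB] TO DISK = N'E:\Backups\MyDB_pre.bak';
        BACKUP LOG [MyDB] WITH TRUNCATE_ONLY;   -- discard log records without backing them up
        DBCC SHRINKFILE (MyDB_log, 1024);       -- shrink the log file to ~1 GB
        ALTER DATABASE [MyDB]
            MODIFY FILE (NAME = MyDB_log, MAXSIZE = 51200MB);  -- cap future growth
        BACKUP DATABASE [MyDB] TO DISK = N'E:\Backups\MyDB_post.bak';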

  • split virtualization design based on environment or server role?

    - by Dan
    I'm setting up the server environment for a new software development group, which will include 4 test environments. These are web applications, so each environment will have an application server and a database server. I'm planning on buying two physical servers (e.g. 6-core CPU each with 12GB or so of RAM), and I'm thinking virtualization is appropriate here. With that in mind, I've thought of a couple of ways I could organize the virtualization strategy:
    - Separated by server role: Server 1 has all the application servers, each in their own guest VM. Server 2 has all the databases.
    - Separated by environment: Server 1 has a VM for two of the environments, with each VM containing both the app server and the database server. Server 2 would also contain two test environments, in the same style (app server and database in the same VM).
    The advantage I see with all the app servers on one server and all the databases on another is that I could probably be more efficient with the database server (one instance running multiple databases). But the other option seems easier to manage (archives/restorations would be contained in a single VM). Any recommendations? TIA.

  • IIS 7.5 fails to open databases after the virtual machine hosting the database server restarts

    - by Jenea
    Hi. I decided to post this question here as well, in case the issue we have is related to SQL Server. There is a problem that has bothered me for some time. I have an ASP.NET MVC application that uses NHibernate for modeling the database. The infrastructure is the following:
    - Windows 2008 R2 for all virtual machines
    - IIS 7.5 is working on one virtual machine
    - SQL Server 2008 is working on another virtual machine
    We have a couple of databases: two that store application data, and one that registers all unhandled exceptions. Sometimes the virtual machine that hosts the database server restarts (in the middle of the night; not quite sure about the reason). After that, connections to the databases that store application data stop working, and as a result there are thousands of unhandled exceptions that get registered in the third database. It is important to mention that the databases are accessible from Management Studio. The problem is solved by resetting IIS. Connections are handled via an NHibernateUtil class, which opens and closes a session at each request.

  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL set up on Windows along with Plesk. The version is 5.0.45 Community. The databases I have on the server are MyISAM as well as InnoDB, but predominantly InnoDB. I have 8G memory on my server, but MySQL isn't going above 1.3G, and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size: it works if I set it to 1G, but if I set 2G or above, the server doesn't come back online! I want MySQL to use at least 5-6 gigs of the memory I have, for performance, but I can't get this to work. Can anyone please help? My MySQL config file is below (there are 2 mysqld sections... when I used MySQL Workbench it created another one!):

        [MySQLD]
        port=3306
        basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL
        datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data
        default-character-set=latin1
        default-storage-engine=INNODB
        query_cache_size=128M
        table_cache=1024
        tmp_table_size=32M
        thread_cache=32
        myisam_max_sort_file_size=100G
        myisam_max_extra_sort_file_size=100G
        myisam_sort_buffer_size=2M
        key_buffer_size=32M
        read_buffer_size=16M
        read_rnd_buffer_size=2M
        sort_buffer_size=8M
        innodb_additional_mem_pool_size=24M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=10M
        innodb_buffer_pool_size=1G
        innodb_log_file_size=10M
        innodb_thread_concurrency=8
        max_connections=700
        key_buffer=48M
        max_allowed_packet=5M
        sort_buffer=2M
        net_buffer_length=4K
        old_passwords=1
        wait_timeout=20
        connect_timeout=60

        [client]
        port=3306

        [mysqld]
        query_cache_min_res_unit = 4096
        innodb_additional_mem_pool_size = 1048576
        innodb_buffer_pool_size = 1G
        query_cache_limit = 1048576
        key_buffer_size = 8388608
        sort_buffer_size = 2097144
        query_cache_type = 1
        query_cache_size = 312M
        log-slow-queries
        connect_timeout = 5
        wait_timeout = 20
        thread_cache_size = 15
        read_buffer_size = 131072
        table_cache = 64

  • SQL Server Subscriber Migration

    - by SuperCoolMoss
    We currently have one-way transactional replication from a SQL Server 2005 OLTP publisher/distributor to two subscribers (one SQL 2005 and the other SQL 2008 R2). Replication security is via the SQL Agents' domain service account (the same account is used on all boxes). The SQL 2008 R2 subscriber is used for BI purposes and hosts a database that has a subset of the production publisher database's tables, with different security and indexes. We need to migrate this BI subscriber to a newer box with more performant hardware. The plan is as follows:
    1. Stop replicating to the BI box (continue replicating to the other subscriber).
    2. Back up all databases on the BI box (including system databases).
    3. Restore all databases (including master, in single-user mode) to the new BI box (this has SQL Server 2008 R2 already installed).
    4. Take the old BI box off the network and shut it down.
    5. Rename and re-IP the new BI box to be the same as the old box.
    6. Switch replication back on.
    Are there any flaws in this approach?

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to do my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated, thanks in advance.
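    One hedged possibility, assuming a MySQL version of at least 5.6 with innodb_file_per_table enabled (the poster's version isn't stated, and older servers don't support this): InnoDB tables can be created with an explicit DATA DIRECTORY, which would let an archive database live on the spinning disk while staying queryable. Database, table, and path names below are placeholders:

        -- assumption: MySQL 5.6+, innodb_file_per_table=ON
        CREATE DATABASE archive_week42;
        CREATE TABLE archive_week42.measurements (
            id    BIGINT PRIMARY KEY,
            value DOUBLE
        ) ENGINE=InnoDB
          DATA DIRECTORY = '/mnt/spindle/mysql-archive';

    On Linux, a cruder alternative is symlinking a database's directory under the data dir to another disk, though the MySQL manual lists caveats for that approach.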

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update - I found that doing a manual detach/reattach, using the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)", got around this issue. I'll just be creating a script to detach and reattach, but doing the file copies manually. Any info on how to overcome the problems with the wizard would still be helpful in the future. I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot copy if they have views into other databases that have not yet been copied to the target system. The log file generated says it failed with the following error: "Invalid object name", in reference to the database in the view. If I first copy just the database referenced in the view, and then in a separate step copy the database containing the view, it is successful. However, some other databases have views into each other, so I can't just adjust the order in which the copies occur. Is there any way to ignore this error and just allow everything to copy?
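    For reference, the detach/attach sequence from the cited article looks roughly like this in T-SQL (database and file names are placeholders):

        USE master;
        EXEC sp_detach_db @dbname = N'SalesDB';
        -- copy SalesDB.mdf and SalesDB_log.ldf to the new server, then on that server:
        CREATE DATABASE SalesDB
            ON (FILENAME = N'D:\Data\SalesDB.mdf'),
               (FILENAME = N'E:\Logs\SalesDB_log.ldf')
            FOR ATTACH;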

  • Moving Images from Database to File System

    - by msarchet
    Currently in our system we have been storing image files in the database (SQL Server 2005 Express). Unfortunately it wasn't foreseen that this would reach the maximum database size allowed by the SQL Express license. So I have proposed a plan of storing the images in the file system and only indexing where each file is in the database. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then only save the actual ImageID in the table, which is basically an FK from the PK of the record with the image. I have determined that it would be best to then split this down into sub-directories inside of ImagesRoot, based on groups of 1000 images: basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999 it would be in %ImagesRoot%\1\999). I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines, I would also take reasons why it should all be in databases. I should mention we already have automated backups in place that run for all of our customers' databases and any files generated by our program that are required to be saved over a period of time. These are optional, but if someone isn't using our system it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks
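    The bucketing scheme, as a T-SQL sketch (the table and column names are placeholders):

        -- e.g. ImageID 1999 -> relative path '1\999' under ImagesRoot
        SELECT ImageID,
               CAST(ImageID / 1000 AS varchar(10)) + '\' +
               CAST(ImageID % 1000 AS varchar(10)) AS RelativePath
        FROM dbo.Images;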

  • Excel 2013: VLookup for cells that share common characters within cell but are both surrounded by other non-matching text

    - by Kylie Z
    I am pulling information from 2 different databases. The databases use different naming protocols for the exact same item/specified placement; however, they always have certain components of the name in common. The lengths of these names can vary throughout each of the databases (see the pic below), so I don't think counting characters would help. I need a formula (probably a VLOOKUP/MATCH/INDEX of some sort) to pair up the names from the 2nd database with the names from the 1st database, and then place the match in the adjacent column (B2) on Sheet1. Until this point I've had to match, copy, and paste the pairs manually from one sheet to the other, and it takes FOREVER. Any help would be much appreciated!!!
    For example:
    - Database1 name in Sheet1, A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
    - Database2 name in Sheet2, A13: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS728X90_728X90_DFA
    - Common factors: "ROSMSNAUTOSMASSACHUSETTS" & "728X90". Therefore A2 and A13 need to pair up.
    In some cases, Database1 and Database2 will have a common name aspect but the sizing will be different. They need to have BOTH aspects in common in order to be paired, so I would NOT want the below example to pair up:
    - Database1 name in Sheet1, A2: 728x90_Allstate_629930_ALL_JUL_2013_MASSACHUSETTSAUTO_BAN_MSN_ROSMSNAUTOSMASSACHUSETTS_7.2.13
    - Database2 name in Sheet2, A12: BAN_MSN_ROSMSNAUTOSMASSACHUSETTS300X250_300X250_DFA
    - Common factor: only "ROSMSNAUTOSMASSACHUSETTS" matches. "728x90" is not equal to "300X250"; the sizing is different, so they should not be paired.

  • Software Testing Humor

    - by mbcrump
    I usually don't share this kind of thing unless it really makes me laugh. At least I can provide a link to a free eBook on Pablo's S.O.L.I.D. principles. S.O.L.I.D. is a collection of best-practice object-oriented design principles that you can apply to your design to accomplish various desirable goals like loose coupling, higher maintainability, intuitive location of interesting code, etc. You may also want to check out Pablo's 31 Days of Refactoring eBook as well.

  • Silverlight Cream for March 08, 2010 -- #809

    - by Dave Campbell
    In this Issue: Michael Washington, Tim Greenfield, Bobby Diaz(-2-), Glenn Block(-2-), Nikhil Kothari, Jianqiang Bao(-2-), and Christopher Bennage.
    Shoutouts: Adam Kinney announced a big update for the Project Rosetta site today. Arpit Gupta has opened a new blog with a great logo: "I think therefore I am dangerous" :)
    From SilverlightCream.com:
    - DotNetNuke Silverlight Traffic Module: If it's DNN and Silverlight, it has to be my buddy Michael Washington :) ... Michael has combined those stunning gauges you've seen with website traffic... just too cool! Grab the code and display yours too!
    - Cool demonstration of Silverlight VideoBrush: This is a no-code post by Tim Greenfield, but I like the UX on this Jigsaw Puzzle page... and you can make your own.
    - Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1: Bobby Diaz has an informative post up on combining earthquake data with Bing Maps in Silverlight 3... check it out, then grab the recently posted live demo and source code.
    - Adding Volcanos and Options – Earthquake Locator, part 2: Bobby Diaz also added volcanic activity to his earthquake Bing Maps app, and updated the downloadable code and live demo.
    - Building Hello MEF – Part IV – DeploymentCatalog: Glenn Block posted a pair of MEF posts yesterday... made me think I missed one :) ... the first one is about the DeploymentCatalog. Note he is going to be using the CodePlex bits in his posts.
    - Building HelloMEF – Part V – Refactoring to ViewModel: Glenn Block's part V is about MEF and MVVM -- no, really! ... he is refactoring MVVM into the app with a nod to Josh Smith and Laurent Bugnion... get your head around this...
    - The Case for ViewModel: Nikhil Kothari has a post up about the ViewModel, and how it facilitates designer/developer workflow, jumpstarts development, improves scaling, and makes async programming development simpler.
    - MMORPG programming in Silverlight Tutorial (12): Map Instance (Part I): Jianqiang Bao has part 12 of his MMORPG game up... this one shows how to deal with obstructions on maps.
    - MMORPG programming in Silverlight Tutorial (13): Perfect moving mechanism: Jianqiang Bao also has part 13 up; this second one is about sprite movement around the obstructions.
    - 1 Simple Step for Commanding in Silverlight: Christopher Bennage blogged about commanding in Silverlight; he begins with a blog post about commands in Silverlight 4, then goes on to demonstrate the Caliburn way of doing commanding.
    Stay in the 'Light!

  • Starting this week: Dublin, Maidenhead, and London

    - by KKline
    This might be the most overcommitted four-week period of my life. I'm tired just thinking about it! Not only am I traveling internationally and speaking over the next few weeks, I'm also helping on two book projects, learning some new applications from Quest Software, and helping on a small Transact-SQL refactoring project. Swag on hand? I've got a special printing of 500 video training DVDs for this trip: SQL Server Training on DMVs, and Performance Monitor and Wait Events. Plus, I'll have...(read more)

  • A toolset for self improvement and learning [closed]

    - by Sebastian
    Possible Duplicate: I'm having trouble learning
    I've been working as an IT consultant for 1½ years and I am very passionate about programming. Before that I studied for an MSc in Software Engineering and had a part-time job as a developer for a big telecom company. During that time I also took extra courses and earned an SCJP certificate. I have been continuously reading a lot of books during the last 3½ years. Now to my problem: I want to continue learning and become a really, really good developer. Apart from my daytime job as a full-time Java developer, I have taken university courses in, for me, new languages and paradigms, most recently Android game development and then functional programming with Scala. I've read books, gone to conferences, and given a couple of presentations for internal training purposes in our local office. I want advice from other people who have previously been in my situation or currently are. What are you doing to keep improving yourselves? Here are some things that I have found are working for me:
    Reading books: I've mostly read books about best practices for programming, OO design, refactoring, design patterns, TDD. Software craftsmanship, if you like. I keep a reading list, and my current book is Apprenticeship Patterns.
    Taking courses: In my country we have a really good system for taking online distance courses. I have also taken one course at coursera.org and I highly recommend that platform. I've looked at courses at oreilly.com, industriallogic, and javaspecialists.eu, and they seem to be okay. If someone gives these types of courses a really good review, I can probably convince my boss. Workshops that span a couple of days would probably be harder, but I've seen that Uncle Bob will have one about refactoring and TDD in 6 months, not far from here.. :) Are there possibly some online learning platforms that I don't know about?
    Educational videos: I've bought Uncle Bob's videos from cleancoders.com and I highly recommend them. The only things I don't like are that they are quite expensive and that he talks about astronomy for ~10 minutes in every episode.
    Getting certified: I had a lot of fun and learned a lot when I studied for the SCJP. I have also done some preparation for the Microsoft equivalent but never went for it. I think it is good when selling yourself as a newly graduated student, and it will also boost your knowledge if you are interested in it.
    Now I would like others to start sharing their experiences and possibly give me some advice! BR Sebastian

  • « PHP Next Generation »: the next-generation PHP core made official; the project aims to integrate a JIT compiler

    « PHP Next Generation »: the next-generation PHP core made official. The project aims to integrate a JIT compiler. A few days ago, Dimitry Stogov, an engineer at Zend Technologies, unveiled the results of his work on optimizing PHP's performance. He had carried out a refactoring of the PHP code, yielding performance gains of 20% for applications like Wordpress 3.6 and 11.7% for Drupal 6.1. The project was born...

  • Virtual Brown Bag: Ruby Newbies, Mockups, There *is* an I in SOLID, fuv

    - by Brian Schroer
    At this week's Virtual Brown Bag meeting:
    - Claudio pointed us to Try Ruby! and Rails For Zombies, two sites to educate Ruby newbies
    - We looked at the free version of Balsamiq, and other online mockup sites
    - George walked us through a refactoring to isolate roles and adhere to the Interface Segregation Principle (the "I" in SOLID)
    - We laughed at fuv, the code editor for "real programmers"
    For detailed notes, links, and the video recording, go to the VBB wiki page: https://sites.google.com/site/vbbwiki/main_page/2011-02-10

  • What are good places to get programming online certifications?

    - by Oscar Mederos
    I was reading "Am I unhireable?" and one of the suggestions @JohnFx gave was getting a few certifications. What is a good place to get online developer certifications? Please provide info about the site if you have used it. I already have some certs from the ExpertRating site. Note: it could be certifications about programming in general (methodologies such as XP, Scrum, etc., modeling, design patterns, architecture, refactoring, and so on).

  • Working with Legacy code #4 : Remove the hard dependencies

    - by andrewstopford
    During your refactoring cycle you should be seeking out the hard dependencies that the code may have. Examples of these include:
    - File system
    - Database
    - Network (HTTP)
    - Application server (Crystal)
    Classes that service these kinds of dependencies (or code that can be abstracted into such classes) should be wrapped in an interface for easier mocking. If your team starts referring to the interface version of these classes, the hard dependency will, over time, work itself free.
