Search Results

Search found 14037 results on 562 pages for 'master pages'.

Page 77/562 | < Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • how much is a graduate Information Technology (IT) degree worth?

    - by T. Webster
    I have a bachelor's degree in IS/MIS (not CS) and a few years of software development experience. I want to pursue some sort of graduate education that will give me more career options, and I was wondering whether a graduate degree in Information Technology (not CS) is worth pursuing.
    - How much is a master's degree in Information Technology worth? By this I mean: how marketable does it make you, and how valuable is the education itself?
    - How does its worth compare to a master's in CS?
    - What types of employers look for an IT master's degree vs. a Computer Science master's degree?
    - What's the most valuable thing an Information Technology degree can give that someone wouldn't have without it?
    Thanks.

    Read the article

  • Why is there no Microsoft Certified Master program targeted at developers?

    - by Jason Coyne
    In the lower-level certifications, developer technology is all over the place. At the highest level (Microsoft Certified Architect), the Solutions track appears to be a good fit for high-level application designers and architects. MCA requires an MCM as a prerequisite, yet none of the MCM tracks are targeted towards development. Obviously, to be a good architect you need knowledge of other technologies: servers, SQL, messaging, etc. But those seem like things that should be part of the course load, not the sole focus. Are the lower tiers really as high as you can go for application-focused professionals? For most developers, the SQL MCM seems to be the best fit. Are the MCM and MCA really targeted at administrators rather than developers?

    Read the article

  • Fusion Product Hub for Supply Chain Management

    Oracle Fusion Product Hub is a key component of Oracle's Supply Chain and Master Data Management strategy. Using a revolutionary approach to product master data management processes, Product Hub delivers:
    1) A unified, accurate product definition that is harmonized within and across the enterprise value chain
    2) Flexible, robust data governance workflows and policies to govern product master data
    3) A product dashboard and embedded analytics to enable quick, informed decisions

    Read the article

  • I'm a PHP programmer, how easy will it be to "master" C++?

    - by ThinkingInBits
    I've been a PHP programmer for 8 or so years. I'm familiar with OOP and try to follow best practices whenever I program. I would like to pick up C++ to possibly enter the game development field; right now I'm doing web dev. I'm not a school person, but I was wondering what people think about self-taught math to complement learning C++ for game dev. The last math course I took was Algebra 2. Should I start delving into pre-calculus? Do you have any other suggestions to ease the process?

    Read the article

  • Where do you earn more money (Autonomous Systems vs Distributed Systems)? [closed]

    - by Puckl
    I am interested in both topics and can choose between them for my computer science master's. I think the distributed systems master's focuses more on software technologies, while the autonomous systems master's is focused on robotics and machine learning. Can you get good jobs in the field of machine learning without a Ph.D.? I guess there are more jobs available in the software-tech world; is this right? Where do you earn more money? (It is not the only criterion, but it matters.)

    Read the article

  • C#: How to write a LINQ master-detail query for a 0..n relationship?

    - by JK
    Given a classic DB structure where an Order has zero or more OrderLines and an OrderLine has exactly one Product, how do I write a LINQ query to express this? The output would be:
    OrderNumber - OrderLine - Product Name
    Order-1       null        null         // (this order has no lines)
    Order-2       1           Red widget
    I tried this query, but it does not return the orders with no lines:
    var model = (from po in Orders
                 from line in po.OrderLines
                 select new
                 {
                     OrderNumber = po.Id,
                     OrderLine = line.LineNumber,
                     ProductName = line.Product.ProductDescription,
                 });
    I think the second from is limiting the query to orders that actually have OrderLines, but I don't know another way to express it. LINQ is very non-obvious if you ask me!
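    One way to keep the line-less orders (a sketch using the entity names from the question): DefaultIfEmpty() on the child collection turns the second from into a left outer join.
    var model = from po in Orders
                from line in po.OrderLines.DefaultIfEmpty()   // line is null when the order has no lines
                select new
                {
                    OrderNumber = po.Id,
                    OrderLine   = line == null ? (int?)null : line.LineNumber,
                    ProductName = line == null ? null : line.Product.ProductDescription
                };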

    Read the article

  • Oracle: How to update master with newest row from detail table?

    - by LukLed
    We have two tables:
    Vehicle: Id, RegistrationNumber, LastAllocationUserName, LastAllocationDate, LastAllocationId
    Allocations: Id, VehicleId, UserName, Date
    What is the most efficient (easiest) way to update every row in the Vehicle table with its newest allocation? In SQL Server I would use UPDATE ... FROM and join every Vehicle with its newest Allocation, but Oracle doesn't have UPDATE FROM. How do you do it in Oracle?
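    A hedged sketch using the column names listed above (MERGE is an alternative): a correlated update that pulls the latest allocation per vehicle. It assumes one newest allocation per vehicle (add a tie-breaker on Id otherwise), and the Date column may need quoting or renaming depending on how it was actually created.
    UPDATE Vehicle v
       SET (LastAllocationUserName, LastAllocationDate, LastAllocationId) =
           (SELECT a.UserName, a."Date", a.Id
              FROM Allocations a
             WHERE a.VehicleId = v.Id
               AND a."Date" = (SELECT MAX(a2."Date")
                                 FROM Allocations a2
                                WHERE a2.VehicleId = v.Id))
     WHERE EXISTS (SELECT 1 FROM Allocations a WHERE a.VehicleId = v.Id);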

    Read the article

  • Change master table PK and update related table FK (changing PK from autoincrement to UUID in MySQL)

    - by eleonzx
    I have two related tables: Groups and Clients. Clients belongs to Groups so I have a foreign key "group_id" that references the group a client belongs to. I'm changing the Group id from an autoincrement to a UUID. So what I need is to generate a UUID for each Group and update the Clients table at once to reflect the changes and keep the records related. Is there a way to do this with multiple-table update on MySQL? Adding tables definitions for clarification. CREATE TABLE `groups` ( `id` char(36) NOT NULL, `name` varchar(255) DEFAULT NULL, `created` datetime DEFAULT NULL, `modified` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$ CREATE TABLE `clients` ( `id` char(36) NOT NULL, `name` varchar(255) NOT NULL, `group_id` char(36) DEFAULT NULL, `active` tinyint(1) DEFAULT '1' PRIMARY KEY (`id`), KEY `fkgp` (`group_id`), CONSTRAINT `fkgp` FOREIGN KEY (`group_id`) REFERENCES `groups` (`id`) ON DELETE NO ACTION ON UPDATE CASCADE ) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
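    One observation, hedged against the DDL posted above: since the fkgp constraint is declared ON UPDATE CASCADE, rewriting groups.id is enough and MySQL propagates the new values to clients.group_id in the same statement.
    -- UUID() is evaluated once per row, so every group gets its own value,
    -- and the ON UPDATE CASCADE foreign key keeps clients.group_id in sync
    UPDATE groups SET id = UUID();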

    Read the article

  • Git doesn't sync files until committed, even when checking out a different branch

    - by DertWaiter
    OK, I have git 1.7.11.1 on Windows and a local test repository with 2 branches. One is master, with index.php and help.php; then I create another branch called slave :) From git bash I run rm help.php and it disappears from the folder, but I don't stage anything. I switch to (checkout) the master branch, and it is supposed to restore help.php because the file isn't modified in master, isn't it? But it does not. Only when I go back to the slave branch, commit, and then checkout master does help.php reappear. Is that the way it is supposed to be, and why?
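    For reference, checkout preserves uncommitted changes (including an unstaged deletion); to throw the deletion away and get the file back, something like this works on either branch:
    git checkout -- help.php          # restore the file from the current branch's HEAD
    git checkout master -- help.php   # or restore it explicitly from master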

    Read the article

  • Unable to get my master & details gridview to work.

    - by Javier
    I'm unable to get this to work. I'm very new at programming and would appreciate any help on this. <%@ Page Language="C#" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { } protected void DataGridSqlDataSource_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { } </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Untitled Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:SqlDataSource ID="DataGrid2SqlDataSource" runat="server" ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>" SelectCommand="SELECT [Jobs_PK], [Position_Title], [Educ_Level], [Grade], [JP_Description], [Job_Status], [Position_ID] FROM [Jobs]" FilterExpression="Jobs_PK='@Jobs_PK'"> <filterparameters> <asp:ControlParameter Name="Jobs_PK" ControlId="GridView1" PropertyName="SelectedValue" /> </filterparameters> </asp:SqlDataSource> <asp:SqlDataSource ID="DataGridSqlDataSource" runat="server" ConnectionString="<%$ ConnectionStrings:JobPostings1ConnectionString %>" SelectCommand="SELECT [Position_Title], [Jobs_PK] FROM [Jobs]" onselecting="DataGridSqlDataSource_Selecting"> </asp:SqlDataSource> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="Jobs_PK" DataSourceID="DataGridSqlDataSource" AllowPaging="True" AutoGenerateSelectButton="True" SelectedIndex="0" Width="100px"> <Columns> <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" /> <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" /> </Columns> </asp:GridView> <br /> <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="Jobs_PK" DataSourceID="DataGrid2SqlDataSource" Height="50px" Width="125px"> <Fields> <asp:BoundField DataField="Jobs_PK" HeaderText="Jobs_PK" InsertVisible="False" ReadOnly="True" SortExpression="Jobs_PK" /> <asp:BoundField DataField="Position_Title" HeaderText="Position_Title" SortExpression="Position_Title" /> <asp:BoundField DataField="Educ_Level" HeaderText="Educ_Level" SortExpression="Educ_Level" /> <asp:BoundField DataField="Grade" HeaderText="Grade" SortExpression="Grade" /> <asp:BoundField DataField="JP_Description" HeaderText="JP_Description" SortExpression="JP_Description" /> <asp:BoundField DataField="Job_Status" HeaderText="Job_Status" SortExpression="Job_Status" /> <asp:BoundField DataField="Position_ID" HeaderText="Position_ID" SortExpression="Position_ID" /> </Fields> </asp:DetailsView> </div> </form> </body> error message: Cannot perform '=' operation on System.Int32 and System.String. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.EvaluateException: Cannot perform '=' operation on System.Int32 and System.String.
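    A hedged reading of the error: FilterExpression="Jobs_PK='@Jobs_PK'" compares the integer Jobs_PK column against a string literal. FilterExpression takes {0}-style placeholders for its FilterParameters, so a sketch of the corrected attributes on the detail data source would be:
    FilterExpression="Jobs_PK = {0}"
    <FilterParameters>
        <asp:ControlParameter Name="Jobs_PK" ControlID="GridView1"
             PropertyName="SelectedValue" Type="Int32" />
    </FilterParameters>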

    Read the article

  • git pull not working

    - by dorelal
    I am not using GitHub; we have git set up on our own machine. I created a branch from master called experiment. However, when I try to do git pull I get the following message:
    > git pull
    You asked me to pull without telling me which branch you want to merge with,
    and 'branch.experiment.merge' in your configuration file does not tell me either.
    Please specify which branch you want to merge on the command line and try again
    (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details.
    Here is the result of git remote show origin:
    > git remote show origin
    * remote origin
      Fetch URL: ssh://git.domain.com/var/git/app.git
      Push URL: ssh://git.domain.com/var/git/app.git
      HEAD branch: master
      Remote branches:
        experiment tracked
        master tracked
      Local branches configured for 'git pull':
        master merges with remote master
      Local refs configured for 'git push':
        experiment pushes to experiment (local out of date)
        master pushes to master (up to date)
    As I read the output above, experiment is mapped to origin/experiment, and my local repository knows it is out of date. Then why am I not able to do git pull?
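    For reference, the output shows no 'git pull' mapping for experiment (only master has one), so the branch has to be named explicitly or the upstream recorded once; a sketch:
    git pull origin experiment                      # one-off, names the branch explicitly
    # or record the upstream so a plain 'git pull' works from then on:
    git config branch.experiment.remote origin
    git config branch.experiment.merge refs/heads/experiment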

    Read the article

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good git workflow using Capistrano. I've found a few good articles, but I'm either not grasping completely what they're suggesting (likely) or they're somewhat lacking. Here's roughly what I had in mind so far, but I get caught up on when to merge back into the master branch (i.e. before moving to stage? after?) and on hooking it into Capistrano for deployments:
    1. Make sure you're up to date with all the changes made on the remote master branch by other developers:
       git checkout master
       git pull
    2. Create a new branch that pertains to the particular bug you're trying to fix:
       git checkout -b bug-fix-branch
    3. Make your changes:
       git status
       git add .
       git commit -m "Friendly message about the commit"
    So, this is usually where I get stuck. At this point I have a healthy master branch and a new bug-fix-branch that contains my (untested -- other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge them back into master? I'd prefer not to, since it seems like master should be kept free of untested code. Do I even deploy from master, or should I tag a release first and then modify my production.rb file to deploy from that tag? git-deployment seems to address some of these workflow issues, but I can't figure out how it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's a canonical way to do this, but I either can't find it or I'm too new to git to recognize that I have found it. Help!
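    One common arrangement (a sketch, not the canonical answer): deploy staging from the topic branch, and only after it passes review merge to master, tag, and deploy production from the tag. The branch/tag names and the :branch variables in the deploy files are assumptions.
    cap staging deploy                    # with set :branch, "bug-fix-branch" in staging.rb
    git checkout master
    git merge --no-ff bug-fix-branch      # merge only once staging looks good
    git tag -a v1.2.3 -m "Release 1.2.3"
    git push origin master --tags
    cap production deploy                 # with set :branch, "v1.2.3" in production.rb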

    Read the article

  • Fixing merge conflicts?

    - by user291701
    I have two remote branches, "grape" and "master". I'm currently on "grape". Now I switch to "master": git checkout master Now I want to pull all changes from "grape" into "master" - is this the way to do it?: git merge origin grape It's my understanding that git will then pull all the current state of the remote branch "grape" into my local copy of "master". It will try to auto-merge for me. If there are conflicts, the files in conflict will have some conflict text actually injected into the file. I then have to go into those files, and delete the chunk I don't want (essentially telling git how to merge these files). For each file in conflict, do I add and commit the changes again?: git add problemfile1.txt git commit -m "Fixed merge conflict." git add problemfile2.txt git commit -m "Fixed another merge conflict." ... after I've fixed all the merge conflicts like above, do I just push to "master" again to finish up the process?: git push origin master or is there something else we need to do when we get into this conflict state? Thank you
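    For reference, a typical resolution flow (a sketch): merge the remote-tracking branch, resolve the conflicts, stage every resolved file, and conclude the merge with a single commit before pushing.
    git checkout master
    git pull origin master
    git merge origin/grape            # merge the remote-tracking branch, not "origin grape"
    # edit the conflicted files and remove the <<<<<<< / ======= / >>>>>>> markers
    git add problemfile1.txt problemfile2.txt
    git commit                        # one commit concludes the whole merge
    git push origin master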

    Read the article

  • WordPress SEO Plugins to make your Blog Search Engine Friendly

    - by Vaibhav
    WordPress is the most common blogging system in use today, and its use as a CMS is also widespread. With hundreds of millions of sites running WordPress, getting SEO right for a WordPress-based blog or site is very important. We get regular queries from people who want search engine optimisation for a site or blog built on WordPress. Here is a list of 16 of the best WordPress plugins that can help you achieve better rankings:
    - All in One SEO Pack - The most popular of all SEO plugins for WordPress. It is easy to use, compatible with most other WordPress plugins, and works as a complete SEO package: it automatically generates META tags, optimizes your titles for search engines, and avoids duplicate content. You can also set META tags manually (meta title, meta description and meta keywords) for every page and post on your site.
    - HeadSpace2 - Available in several languages, it lets you manage a wide range of SEO tasks related to metadata: tag your posts, set custom descriptions and titles, and load different settings for different pages so each page ranks on its own relevancy.
    - Platinum SEO Plugin - Automatic 301 redirects for permalink changes, META tag generation, duplicate-content avoidance, SEO optimization of post and page titles, and lots of other features.
    - TGFI.net SEO WordPress Plugin - A modified version of the All in One SEO Pack. Its distinguishing feature is that it generates titles, meta descriptions and meta keywords automatically when overrides are not present.
    - Google XML Sitemaps - Sitemaps generated by this tool are supported by Google, Yahoo, Bing, and Ask. Sitemaps make indexing easier for web crawlers, which can retrieve the complete structure of the site and more information from them. The plugin notifies all major search engines every time you create a new post.
    - Sitemap Generator - Generates a highly customizable sitemap for your WordPress site. You can choose what to show and what not to show, list the items in the order of your choice, and it supports pages, permalinks and multi-level categories.
    - SEO Slugs - Generates more search-engine-friendly URLs for your site. Slugs are the filenames assigned to your posts; this plugin removes common words like 'a', 'the', 'in', 'what', 'you' from the automatically assigned slug.
    - SEO Post Links - Similar to SEO Slugs: it removes unnecessary keywords from the slug to keep it short and SEO friendly, and lets you fix the number of characters in the slug.
    - Automatic SEO Links - Creates automatic linking in your posts, for internal as well as external links. Just select your words, anchor text, target URL and the nature of the links (dofollow/nofollow), and the plugin replaces the matches found in the post.
    - WP Backlinks - A helpful plugin for link exchange: whenever a webmaster submits a link for exchange, the plugin spiders that webmaster's site for a reciprocal link, and if everything checks out, your link is exchanged.
    - SEO Title Tag - Lets you optimize the title tags of your WordPress blog. Its main features are overriding title tags with custom titles, mass editing, and title tags for 404 pages.
    - 404 SEO Plugin - Lets you customize the 404 page of your site with a custom error message and links to relevant pages.
    - Redirection - A powerful plugin to manage 301 redirections and the logs related to them. It can track 404 errors, log all redirected URLs, and redirect a post automatically when its URL changes.
    - AddToAny - Helps your readers share, save, email and bookmark your posts and pages. It supports more than a hundred social bookmarking, networking and sharing sites.
    - SEO Friendly Images - Makes the images on your site SEO friendly by updating them with proper titles and ALT tags.
    - Robots Meta - Prevents search engines from indexing comments, login and admin pages, and also lets you add tags for individual pages.

    Read the article

  • Command line scripts to restore the 4 system databases of MS SQL Server 2008

    - by ciscokid
    Hi there, can someone give me some advice on how to restore the 4 system databases (master, msdb, model, tempdb) of a SQL Server 2008 instance? I've already done some testing myself (on restoring the master database) and ended up with the following command-line script:
    ::set variables
    set dbname=master
    set dbdirectory=C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA
    title Restoring %dbname% database
    net stop mssqlserver
    cd C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn
    sqlservr -m
    sqlcmd -Slocalhost -E -Q "restore database master from disk='c:\master.bak' WITH REPLACE"
    net start mssqlserver
    pause
    After the 'sqlservr -m' command (which starts the instance in single-user mode, only necessary when restoring the master database), the script stops. So to execute the last 2 commands I have to split the script into 2 smaller scripts and run them one after the other. Does anyone have an idea how I can merge them into one single script that runs without interruption? I also want to restore the other 3 system databases using command-line scripts like this one. Can someone please advise me how to go on? I've already noticed that restoring tempdb is not so easy, but there has to be a way... Looking forward to your advice!
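    One way around the blocking sqlservr -m call (a hedged sketch): let the Windows service start in single-user mode instead of running the executable in the foreground, restricting connections to sqlcmd. SQL Server shuts itself down after master is restored, so the final net start brings it back up normally.
    net stop MSSQLSERVER
    net start MSSQLSERVER /m"SQLCMD"
    sqlcmd -Slocalhost -E -Q "RESTORE DATABASE master FROM DISK='c:\master.bak' WITH REPLACE"
    net start MSSQLSERVER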

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is now acting as the master. I used the following procedure to create it:
    mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql
    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (in a different location), so I would like to make sure there is 100% data consistency between the servers. I've found this article where it says: "The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database." Thank you
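    A hedged note: for InnoDB tables, --single-transaction makes mysqldump take one consistent snapshot across all databases without stopping the master (MyISAM tables would need --lock-all-tables instead).
    mysqldump --single-transaction --master-data=2 --routines --triggers --all-databases > dump.sql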

    Read the article

  • How to "drag and drop" folders or multiple HTML files into a browser and have them open in multiple tabs

    - by PoorLuzer
    I save pages that I browse on the net and find interesting into a folder called C:\PageSaves. Later, during the commute, I open these pages to see what they are and move them into a neatly categorized folder tree. For example, Perl-related pages go to C:\Pages\Perl, MySQL-related pages go to C:\Pages\MySQL, and so on. I was wondering if there is any way to open any number of HTML files on disk / inside a folder (C:\PageSaves in my case) in Mozilla/Firefox/K-Meleon etc. For example, I would like to just "drag and drop" the folder C:\PageSaves into Firefox and have it open every .html page in the folder in a separate tab. Right now, if I "drag and drop" multiple HTML files, it just opens the last file in the selection. I'd also like a set of toolbar buttons, basically a plugin, that lets me nuke a page from disk (if I don't want to keep it anymore) or move the file (and its corresponding folder) into a predefined / new folder. I am familiar with coding full-blown Firefox plugins, so even if only something very basic/similar exists, I can take it forward. Hints/clues/other methods of achieving the same result are all welcome!
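    One low-tech workaround (a sketch; assumes firefox.exe is on the PATH and Firefox is configured to open new links in tabs rather than windows): launching each file from a loop at the command prompt makes Firefox open them as tabs in the existing window.
    :: use %%f instead of %f when this line lives inside a .bat file
    for %f in (C:\PageSaves\*.html) do start "" firefox "%f"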

    Read the article

  • Getting Run time 1004 error in code

    - by krishna123
    I tried the code provided by vba express for combining sheet, while execution it is displaying Run Time error 1004: Application Defined or Object Defined Error: My Scenario is: I have a Excel, in that I have first sheet "Connection" and after it I have Sheet1, Sheet2 and so on. I am combining all sheets except Sheet"Conection" by saying start with sheet2. I tried following line of code to exclude "Connection" sheet: If Not Sheet.Name = "Connection" then but it did not work. Whatever the sheets I have in some of them I have large data in some cells. Here is the code which I am using: I have highlighted the line Sub CopyFromWorksheets() Dim wrk As Workbook 'Workbook object - Always good to work with object variables Dim sht As Worksheet 'Object for handling worksheets in loop Dim trg As Worksheet 'Master Worksheet Dim rng As Range 'Range object Dim colCount As Integer 'Column count in tables in the worksheets Set wrk = ActiveWorkbook 'Working in active workbook For Each sht In wrk.Worksheets If sht.Name = "Master" Then sht.Delete Exit Sub End If Next sht 'We don't want screen updating Application.ScreenUpdating = False 'trg.SaveAs "C:\temp\CPReport1.xls" 'Add new worksheet as the last worksheet Set trg = wrk.Worksheets.Add(After:=wrk.Worksheets(wrk.Worksheets.Count)) 'Rename the new worksheet trg.Name = "Master" 'Get column headers from the first worksheet 'Column count first Set sht = wrk.Worksheets(2) colCount = sht.Cells(1, 255).End(xlToLeft).Column 'Now retrieve headers, no copy&paste needed With trg.Cells(1, 1).Resize(1, colCount) .Value = sht.Cells(1, 1).Resize(1, colCount).Value 'Set font as bold .Font.Bold = True End With trg.SaveAs "C:\temp\CPReport1.xls" 'We can start loop 'Skip Sheet - Connection If Not sht.Name = "Connection" Then For Each sht In wrk.Worksheets 'If worksheet in loop is the last one, stop execution (it is Master worksheet) If sht.Index = wrk.Worksheets.Count Then Exit For End If 'Data range in worksheet - starts from second row as first rows are the header rows in all worksheets Set rng = sht.Range(sht.Cells(2, 1), sht.Cells(65536, 1).End(xlUp).Resize(, colCount)) 'Put data into the Master worksheet '----------------- Error in below line -------------------------------------------------- trg.Cells(65536, 1).End(xlUp).Offset(1).Resize(rng.Rows.Count, rng.Columns.Count).Value = rng.Value '---------------------------------------------------------------------------------------- Next sht End If 'Fit the columns in Master worksheet trg.Columns.AutoFit 'Dim dest, destyfile 'dest = "E:\Test_Merge\" 'destyfile = dest & "_" & trg.Name 'trg.SaveAs (destyfile) 'Screen updating should be activated Application.ScreenUpdating = True End Sub
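    A hedged sketch of two likely fixes (not a verified solution): the "Connection" test only takes effect if it sits inside the For Each loop, and copying the range instead of assigning rng.Value avoids the array assignment on the highlighted line, which is a common source of error 1004 when some cells hold very long text.
    For Each sht In wrk.Worksheets
        If sht.Index = wrk.Worksheets.Count Then Exit For    ' last sheet is the new Master
        If sht.Name <> "Connection" Then
            Set rng = sht.Range(sht.Cells(2, 1), sht.Cells(65536, 1).End(xlUp).Resize(, colCount))
            rng.Copy trg.Cells(65536, 1).End(xlUp).Offset(1) ' paste below the rows already collected
        End If
    Next sht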

    Read the article

  • Site Security/Access management for asp.net mvc application

    - by minal
    I am trying to find a good pattern for user access validation. On a WebForms application I had a framework which used user roles to define access: users were assigned to roles, and roles were granted access to pages. I had a table in the database listing all the pages, and pages could have child pages that inherited their access from the parent. When defining access, I granted roles access to pages, and users in a role then had access to those pages. It was fairly simple to manage as well. I implemented this in a base class that every page inherited; on page load/init I would check the page URL, validate access, and act appropriately. However, I am now working on an MVC application and need to implement something similar, and I can't find a good way to make my previous solution work, purely because I don't have static pages as URL paths. I'm also not sure how best to approach this now that I have controllers rather than aspx pages. I have looked at MvcSiteMapProvider, but that does not work off a database; it needs a sitemap file. I need to be able to change user permissions on the fly. Any thoughts/suggestions/pointers would be greatly appreciated.
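    One approach that maps fairly directly onto MVC (a hedged sketch; PermissionService and its method are hypothetical stand-ins for the database lookup): a custom authorization filter keyed on controller/action instead of page URL, applied globally or per controller, which keeps permission changes in the database effective on the fly.
    using System.Web;
    using System.Web.Mvc;

    public class DbPermissionAttribute : AuthorizeAttribute
    {
        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            if (!base.AuthorizeCore(httpContext))
                return false;

            var route = httpContext.Request.RequestContext.RouteData;
            var controller = (string)route.Values["controller"];
            var action = (string)route.Values["action"];

            // Hypothetical service: resolves the user's roles against a
            // controller/action permission table stored in the database.
            return PermissionService.UserCanAccess(httpContext.User.Identity.Name, controller, action);
        }
    }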

    Read the article

  • Entity Framework: filtering a navigation property further without loading it into memory

    - by cellik
    Hi, I've two entities with 1 to N relation in between. Let's say Books and Pages. Book has a navigation property as Pages. Book has BookId as an identifier and Page has an auto generated id and a scalar property named PageNo. LazyLoading is set to true. I've generated this using VS2010 & .net 4.0 and created a database from that. In the partial class of Book, I need a GetPage function like below public Page GetPage(int PageNumber) { //checking whether it exist etc are not included for simplicity return Pages.Where(p=>p.PageNo==PageNumber).First(); } This works. However, since Pages property in the Book is an EntityCollection it has to load all Pages of a book in memory in order to get the one page (this slows down the app when this function is hit for the first time for a given book). i.e. Framework does not merge the queries and run them at once. It loads the Pages in memory and then uses LINQ to objects to do the second part To overcome this I've changed the code as follows public Page GetPage(int PageNumber) { MyContainer container = new MyContainer(); return container.Pages.Where(p=>p.PageNo==PageNumber && p.Book.BookId==BookId).First(); } This works considerably faster however it doesn't take into account the pages that have not been serialized to the db. So, both options has its cons. Is there any trick in the framework to overcome this situation. This must be a common scenario where you don't want all of the objects of a Navigation property loaded in memory when you don't need them.
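    For the default EF4 entities described here (EntityCollection navigation properties), one option is to build the query from the collection itself rather than from a new context. This is a sketch; note that CreateSourceQuery() queries the store, so pages added but not yet saved would still need a separate in-memory check.
    public Page GetPage(int pageNumber)
    {
        if (Pages.IsLoaded)
            return Pages.First(p => p.PageNo == pageNumber);   // collection already in memory

        return Pages.CreateSourceQuery()                       // translated to SQL, fetches one row
                    .First(p => p.PageNo == pageNumber);
    }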

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query: UPDATE Indexer.Pages SET LastError=NULL where LastError is not null; Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back which would probably take all evening. Ideas? I'm using Postgres 9.0.0. UPDATE: Here's the schema: CREATE TABLE indexer.pages ( id uuid NOT NULL, url character varying(1024) NOT NULL, firstcrawled timestamp with time zone NOT NULL, lastcrawled timestamp with time zone NOT NULL, recipeid uuid, html text NOT NULL, lasterror character varying(1024), missingings smallint, CONSTRAINT pages_pkey PRIMARY KEY (id ), CONSTRAINT indexer_pages_uniqueurl UNIQUE (url ) ); I also have two indexes: CREATE INDEX idx_indexer_pages_missingings ON indexer.pages USING btree (missingings ) WHERE missingings > 0; and CREATE INDEX idx_indexer_pages_null ON indexer.pages USING btree (recipeid ) WHERE NULL::boolean; There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
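    One commonly suggested angle (hedged, untested against this schema): do the update in batches driven by a partial index, so each transaction touches a bounded number of rows and the remaining rows stay cheap to find.
    CREATE INDEX idx_pages_lasterror ON indexer.pages (id) WHERE lasterror IS NOT NULL;

    -- repeat until it reports 0 rows updated, then drop the index
    UPDATE indexer.pages
       SET lasterror = NULL
     WHERE id IN (SELECT id
                    FROM indexer.pages
                   WHERE lasterror IS NOT NULL
                   LIMIT 10000);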

    Read the article

  • What kind of specific projects can I do to master bitwise operations in C++? Also is there a canonical book? [closed]

    - by Ford
    I don't use C++ or bitwise operations at my current job but I'm thinking of applying to companies where it is a requirement to be fluent with them (on their tests anyway). So my question is: Can anyone suggest a project which will require gaining a fluency in bitwise operations to complete? On a side note, is there a canonical book on optimization techniques using bitwise operations since that seems to be an important use of them?

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VM's cause Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see cache shrinking description above).    This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicaability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that now the max server memory setting includes any size page allocations and managed CLR procedure allocations it now represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is no more AWE or hot-add support for 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/
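    To illustrate the max server memory interaction described above (a small hedged example): lowering the setting creates internal memory pressure, so SQL Server trims its caches and releases memory back to the OS, which is what a scheduled adjustment job would do.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;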

    Read the article

  • Reversing page navigation on PHP

    - by ilnur777
    Can anyone help me with reversing this PHP page navigation? Current script setting shows this format: [0] | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 ... 14 • Forward • End But I really need it to reverse for this format: [14] | 13 | 12 | 11| 10 | 9 | 8 | 7 | 6 ... 0 • Back • Start Here is the PHP code: <? $onpage = 10; // on page function page(){ if(empty($_GET["page"])){ $page = 0; } else { if(!is_numeric($_GET["page"])) die("Bad page number!"); $page = $_GET["page"]; } return $page; } function navigation($onpage, $page){ //---------------- $countt = 150; $cnt=$countt; // total amount of entries $rpp=$onpage; // total entries per page $rad=4; // amount of links to show near current page (2 left + 2 right + current page = total 5) $links=$rad*2+1; $pages=ceil($cnt/$rpp); if ($page>0) { echo "<a href=\"?page=0\"><<< Start</a> <font color='#CCCCCC'>•</font> <a href=\"?page=".($page-1)."\">< Back</a> <font color='#CCCCCC'>•</font>"; } $start=$page-$rad; if ($start>$pages-$links) { $start=$pages-$links; } if ($start<0) { $start=0; } $end=$start+$links; if ($end>$pages) { $end=$pages; } for ($i=$start; $i<$end; $i++) { echo " "; if ($i==$page) { echo "["; } else { echo "<a href=\"?page=$i\">"; } echo $i; if ($i==$page) { echo "]"; } else { echo "</a>"; } if ($i!=($end-1)) { echo " <font color='#CCCCCC'>|</font>"; } } if ($pages>$links&&$page<($pages-$rad-1)) { echo " ... <a href=\"?page=".($pages-1)."\">".($pages-1)."</a>"; } if ($page<$pages-1) { echo " <font color='#CCCCCC'>•</font> <a href=\"?page=".($page+1)."\">Forward ></a> <font color='#CCCCCC'>•</font> <a href=\"?page=".($pages-1)."\">End >>></a>"; } } $page = page(); // detect page $navigation = navigation($onpage, $page); // detect navigation ?>
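    A hedged sketch of the central change: inside navigation(), iterate the same window from high to low (and swap the Back/Forward links accordingly) so the numbers render from the last page down to 0.
    for ($i = $end - 1; $i >= $start; $i--) {
        echo " ";
        echo ($i == $page) ? "[" : "<a href=\"?page=$i\">";
        echo $i;
        echo ($i == $page) ? "]" : "</a>";
        if ($i != $start) {
            echo " <font color='#CCCCCC'>|</font>";
        }
    }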

    Read the article
