Search Results

Search found 7529 results on 302 pages for 'replace'.


  • Cannot load from RAID with GRUB

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS machine, and my /dev/sda HDD was replaced several days ago. I used these commands to do the replacement:

        # go to superuser
        sudo bash
        # see RAID state; it should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove the broken disk from the RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down the computer
        shutdown now
        # physically replace the old disk with the new one, then start the system again
        # see partitions
        fdisk -l
        # copy the partition table from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # set the partition type for sda1 back to Linux raid autodetect
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to the RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; it should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to watch rebuild progress you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says that /dev/md0 has no valid partition table. So:
    1) "update-grub" finds Linux only on /dev/sda and /dev/sdb, not on /dev/md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install to the following devices: /dev/md0"
    I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2        940910985   976768064    17928540    5  Extended
        /dev/sdb5        940911048   976768064   17928508+   82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? It is giving me a big headache.
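    A common resolution for this symptom (an assumption based on standard GRUB/mdadm practice, not something confirmed in this thread) is that the "no valid partition table on /dev/md0" message is harmless, since md0 carries a filesystem directly rather than a partition table, and that GRUB should be installed to the boot sector of each physical member disk rather than to the array device. A hedged sketch:

        # install GRUB to both physical disks so the machine can boot from either
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub
        # in "dpkg-reconfigure grub-pc", tick /dev/sda and /dev/sdb, not /dev/md0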

    Read the article

  • How ZFS handles online replacement in a RAID-Z (theoretical)

    - by Kevin
    This is a somewhat theoretical question about ZFS and RAID-Z. I'll use a three-disk single-parity array as an example for clarity, but the problem can be extended to any number of disks and any parity. Suppose we have disks A, B, and C in the pool, and that it is clean. Suppose now that we physically add disk D with the intention of replacing disk C, and that disk C is still functioning correctly and is only being replaced as preventive maintenance. Some admins might just yank C and install D, which is a little more organized as devices need not change IDs; however, this does leave the array degraded temporarily, so for this example suppose we install D without offlining or removing C. Solaris docs indicate that we can replace a disk without first offlining it, using a command such as: zpool replace pool C D This should cause a resilvering onto D. Let us say that resilvering proceeds "downwards" along a "cursor." (I don't know the actual terminology used in the internal implementation.) Suppose now that midway through the resilvering, disk A fails. In theory, this should be recoverable, as above the cursor B and D contain sufficient parity and below the cursor B and C contain sufficient parity. However, whether or not this is actually recoverable depends upon internal design decisions in ZFS which I am not aware of (and which the manual doesn't state in certain terms). If ZFS continues to send writes to C below the cursor, then we are fine. If, however, ZFS internally treats C as though it were gone, resilvering D only from parity between A and B and only writing A and B below the cursor, then we're toast. Some experimenting could answer this question, but I was hoping maybe someone on here already knows which way ZFS handles this situation. Thank you in advance for any insight!
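    For reference, the online-replacement flow under discussion looks like this in practice (pool and device names below are placeholders, not taken from the question):

        # attach D as the replacement for C while C stays online;
        # ZFS resilvers D from the still-healthy array
        zpool replace tank c1t2d0 c1t3d0
        # watch the resilver progress (the "cursor" position) and per-device state
        zpool status -v tank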

    Read the article

  • Move OS from RAID5 array to RAID 1 arrays

    - by Antoine
    I want to give a last boost to my old ProLiant ML350 G5 server, which just needs to stay reliable for a few more years! With a defined budget of about $1500 (I do not have more), I plan to replace the CPU (and add a second one), replace the battery-backed cache of my RAID controller (E200i), double the RAM, and change all hard drives. I have 7 HDDs (SAS 10k rpm, 72 GB) + 1 spare in RAID5, and my system is FULL (no empty tray, full disks). In my current RAID5 array, I have 2 partitions:
    - 1 OS partition, 20 GB
    - 1 data partition, 350 GB
    I plan to replace these 8 disks with:
    - 2 x 300 GB SAS 15k rpm in RAID 1 (= 1 partition for the OS)
    - 2 x 2 TB SATA 7.2k rpm in RAID 1 (= 1 partition for DATA)
    My biggest constraint is that I have only one day to upgrade my server. Therefore, I'm looking to clone all my files (OS + data partition) to my new arrays, i.e.:
    - the OS partition shall be cloned to the RAID1 "2x300GB" array
    - the data partition shall be cloned to the RAID1 "2x2TB" array
    My second problem is that I need to physically remove all the old hard drives before inserting the new ones. I'm running Windows Server 2003 R2, and even though MS support will expire soon, I cannot buy a new licence and spend time on reconfiguration. Obviously, with $1500, I also cannot buy a new server that I could start configuring now! I thought about ASR (NTBackup), but I have no floppy drive (and do not really want to invest in one!). I thought about a Clonezilla clone, and read this interesting link: Windows Server 2003 - move C: partition to a new SAS disk, but I'm not so confident about using Clonezilla with RAID5. What would be the best option to quickly and easily (if possible!) "copy/paste" my OS (so no need to reinstall and reconfigure everything) and DATA / programs / services, etc...? Thanks for your comments
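    One detail worth noting (general knowledge about HP Smart Array controllers, not confirmed in this thread): imaging tools such as Clonezilla see the logical drive that the E200i presents, not the underlying RAID5 members, so the RAID level itself should be invisible to the cloning step. As a safety net before touching the disks, a file-level backup can be taken with the ntbackup that ships with Windows Server 2003; a sketch with hypothetical paths:

        ntbackup backup systemstate /J "SystemState" /F "E:\backup\sysstate.bkf"
        ntbackup backup C:\ /J "OSPartition" /F "E:\backup\os-partition.bkf"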

    Read the article

  • Is 2 GB of RAM better than 2.5 GB?

    - by pibboater
    My laptop has two slots for RAM, and currently has two 512 MB chips, for 1 GB. Windows XP is running terribly slow on it, so I want to upgrade the RAM. I could buy two 1 GB chips to replace both of the current 512 MB chips, to give me 2 GB of RAM. Or, the price is the same to buy one 2 GB chip, to replace just one of the 512 MB chips, and give me 2.5 GB total. The RAM it takes is PC2-4200 533MHz DDR2. What do you think would be better: buying two 1 GB chips so it can take advantage of dual-channel operation, or buying one 2 GB chip to end up with more total RAM but not dual-channel operation? Like I said, price is the same, so performance is the only consideration. I'm not doing anything especially intensive like video or photo editing -- just having multiple Office programs open, playing music, browsers, etc., but currently even opening the first application takes forever. If it matters, the laptop is a Toshiba Qosmio G25-AV513 running Windows XP Media Center SP3. Thanks! Kevin

    Read the article

  • How do I force a restore over an existing database?

    - by Ian Boyd
    I have a database, and I want to force a restore over the top of it. I check the option: "Overwrite the existing database (WITH REPLACE)". But, as expected, SSMS is unable to overwrite the existing database. Of course I don't want different filenames; I want to overwrite the existing database. How do I force a restore over an existing database? And for the Google search crawler: File '%s' is claimed by '%s'(4) and '%s'(3). The WITH MOVE clause can be used to relocate one or more files. RESTORE DATABASE is terminating abnormally. (Microsoft SQL Server, Error: 3176) Update The script (before I deleted the database, because I needed to get it done) was:

        RESTORE DATABASE [HealthCareGovManager]
        FILE = N'HealthCareGovManager_Data',
        FILE = N'HealthCareGovManager_Archive',
        FILE = N'HealthCareGovManager_AuditLog'
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data' TO N'D:\CGI Data\HealthCareGovManager.MDF',
        MOVE N'HealthCareGovManager_Archive' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager.ndf',
        MOVE N'HealthCareGovManager_Log' TO N'D:\CGI Data\HealthCareGovManager.LDF',
        NOUNLOAD, REPLACE, STATS = 10

    I used the UI to delete the existing database, so that I could use the UI to force an overwrite of the (non)existing database. Hopefully there can be an answer so that the next guy can have an answer. No, nobody was in the context of the database (the error message from other connections is quite different from this error, and I only got to see this error after I killed the other connections).
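    The quoted script itself appears to explain error 3176: two different logical files (HealthCareGovManager_Archive and HealthCareGovManager_AuditLog) are MOVEd to the same physical path, D:\CGI Data\HealthCareGovManager.ndf, which is exactly the "File '%s' is claimed by '%s'(4) and '%s'(3)" condition. A hedged correction giving each logical file its own target (the new file names are illustrative):

        RESTORE DATABASE [HealthCareGovManager]
        FROM DISK = N'D:\STAGING\HealthCareGovManager10232013.bak'
        WITH FILE = 1,
        MOVE N'HealthCareGovManager_Data'     TO N'D:\CGI Data\HealthCareGovManager.mdf',
        MOVE N'HealthCareGovManager_Archive'  TO N'D:\CGI Data\HealthCareGovManager_Archive.ndf',
        MOVE N'HealthCareGovManager_AuditLog' TO N'D:\CGI Data\HealthCareGovManager_AuditLog.ndf',
        MOVE N'HealthCareGovManager_Log'      TO N'D:\CGI Data\HealthCareGovManager.ldf',
        NOUNLOAD, REPLACE, STATS = 10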

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning, with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular both require a single feed, will just take the last n items, and clicking around on their website. The Pro option for feedjournal seems almost like it would do the job, but the project seems to be dead, and there's no way to buy it. Before I hack together something crazy I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily pdf email, how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.
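    As a starting point, the fetch-and-email part of this is straightforward to script; below is a minimal sketch in Python (feed URLs and addresses are placeholders, the PDF/newspaper layout is omitted, and it assumes the feedparser package and a local mail server):

        import smtplib
        from email.mime.text import MIMEText

        import feedparser  # pip install feedparser

        FEEDS = ["http://example.com/a.rss", "http://example.com/b.rss"]

        def build_digest(feed_urls):
            """Collect recent entries from every feed into one plain-text digest."""
            lines = []
            for url in feed_urls:
                parsed = feedparser.parse(url)
                lines.append(parsed.feed.get("title", url))
                for entry in parsed.entries[:10]:
                    lines.append("  - %s (%s)" % (entry.get("title", "untitled"),
                                                  entry.get("link", "")))
                lines.append("")
            return "\n".join(lines)

        msg = MIMEText(build_digest(FEEDS))
        msg["Subject"] = "Daily feed digest"
        msg["From"] = "digest@example.com"
        msg["To"] = "me@example.com"

        server = smtplib.SMTP("localhost")
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
        server.quit()

    Run daily from cron, something like this covers the "last twenty-four hours by email" requirement; turning it into a PDF newspaper would be a layout layer on top.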

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this that I am having so far:
    - Moving the whole directory is laborious and time consuming
    - If I only change a few files, it is even more tedious to replace a few files than the whole directory, since the project is getting fairly large and I worry that I'll miss something
    - I often run into permission issues after I've moved things
    - It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method
    Now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a Git repository for this process. But my question is how would I set this all up? Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then I would pull from the Production server (the same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
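    One common layout (a sketch, not the only way to arrange this; paths and host names below are hypothetical) is a bare repository on the Production server with a post-receive hook that checks the pushed code out into the live directory, so a deploy becomes a single push from Dev:

        # on the Production server: create a bare repo and a deploy hook
        git init --bare /srv/repos/mysite.git
        cat > /srv/repos/mysite.git/hooks/post-receive <<'EOF'
        #!/bin/sh
        # check the just-pushed master out into the directory the site runs from
        GIT_WORK_TREE=/srv/www/mysite git checkout -f master
        EOF
        chmod +x /srv/repos/mysite.git/hooks/post-receive

        # on the Dev server: point at production and deploy with a push
        git remote add production ssh://user@production.example.com/srv/repos/mysite.git
        git push production master

    This moves only changed files over the VPN, and the hook always checks files out as one user, which also addresses the permission drift. Machine-specific files (settings with credentials, virtualenvs) are usually kept out of the repo via .gitignore.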

    Read the article

  • XSL 2.0 unparsed text and formatting

    - by Maha
    I want the unparsed text to be formatted with bold characters, or with an increased font size, based on a tag; the example here replaces the searched word with bold characters. Example: test <b> how to <b> when bold <b> when there is more <b> than one place to bold. Can you please advise what is wrong here?

        <xsl:variable name="tcline" select="unparsed-text('generic_tc.txt','UTF-8')"/>
        <xsl:analyze-string select="$tcline" regex="\'<b>'(.*?)\'<b>'">
          <xsl:matching-substring>
            <xsl:value-of select="replace($tcline,'\"<b>"(.*?)\"<b>"','<em>$1\/em/g;')"/>
          </xsl:matching-substring>
          <xsl:non-matching-substring>
            <xsl:value-of select="."/>
          </xsl:non-matching-substring>
        </xsl:analyze-string>
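    A hedged reading of what is wrong (assuming the intent is to wrap the text between pairs of <b> markers in <em> elements): in XSLT 2.0 the regex attribute must escape the angle brackets as &lt; and &gt;, the replacement markup belongs inside xsl:matching-substring as a literal result element around regex-group(1), and replace() cannot help here at all, because it returns a string rather than element nodes, and the trailing /g modifier is Perl/JavaScript syntax, not XPath. A corrected sketch:

        <xsl:variable name="tcline" select="unparsed-text('generic_tc.txt','UTF-8')"/>
        <xsl:analyze-string select="$tcline" regex="&lt;b&gt;(.*?)&lt;b&gt;">
          <xsl:matching-substring>
            <em><xsl:value-of select="regex-group(1)"/></em>
          </xsl:matching-substring>
          <xsl:non-matching-substring>
            <xsl:value-of select="."/>
          </xsl:non-matching-substring>
        </xsl:analyze-string>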

    Read the article

  • zfs pool error, how to determine which drive failed in the past

    - by Kendrick
    I had been copying data off my pool so that I could rebuild it with a different version, to move away from Solaris 11 to something portable between FreeBSD/OpenIndiana etc. It was copying at 20 MB/s the other day, which is about all my desktop drive can handle writing from the network. Suddenly last night it went down to 1.4 MB/s. I ran zpool status today and got this:

          pool: store
         state: ONLINE
        status: One or more devices has experienced an unrecoverable error. An
                attempt was made to correct the error. Applications are unaffected.
        action: Determine if the device needs to be replaced, and clear the errors
                using 'zpool clear' or replace the device with 'zpool replace'.
           see: http://www.sun.com/msg/ZFS-8000-9P
          scan: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                store         ONLINE       0     0     0
                  raidz1-0    ONLINE       0     0     0
                    c8t3d0p0  ONLINE       0     0     2
                    c8t4d0p0  ONLINE       0     0    10
                    c8t2d0p0  ONLINE       0     0     0

    It is currently a 3 x 1 TB drive array. What tools would be best to determine what the error was and which drive is failing? Per the admin doc: "The second section of the configuration output displays error statistics. These errors are divided into three categories: READ: I/O errors occurred while issuing a read request. WRITE: I/O errors occurred while issuing a write request. CKSUM: Checksum errors. The device returned corrupted data as the result of a read request." It said low counts could be anything from a power flux to a disk event, but gave no suggestions as to which tools to use to check and find out.
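    Some first diagnostic steps on Solaris-family systems (a hedged sketch; the device name below is taken from the pool output above, and smartctl requires smartmontools to be installed):

        # FMA's error telemetry: which device reported what, and when
        fmdump -eV | more
        # per-device soft/hard/transport error counters
        iostat -En
        # SMART health for the disk with the highest checksum count
        smartctl -a /dev/rdsk/c8t4d0p0
        # after replacing the disk, or if the event was a one-off (cabling, power):
        zpool clear store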

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 which I am trying to replace. The current setup works perfectly, however I can't for the life of me get the replacement server at site 2 up and running. I'm trying to replace like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) and used exactly the same settings as the old server. I have set up Active Directory, DNS Server, DHCP Server and WINS Server, configured with all the same settings as the old server (except IP address and name). I can access Active Directory but I can't do anything; add, edit, delete all return "the directory service is unavailable". No-one can log in on any of the computers on site 2, and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP but not the net). I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions of where I can start to troubleshoot this? It's driving me crazy and I only have a day before all the users are back on site.
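    Typical first checks for a "directory service is unavailable" symptom on a freshly promoted 2003 domain controller (a hedged list based on standard AD troubleshooting, not on this thread; dcdiag and repadmin come from the Windows Support Tools):

        dcdiag /v                  :: overall health of the new DC
        dcdiag /test:dns           :: DNS is the usual culprit for logon + AD failures
        repadmin /replsummary      :: has the new DC actually replicated from site 1?
        netdom query fsmo          :: which server still holds the operations master roles

    If clients at site 2 point their DNS at the new server before it has replicated (or before it hosts the AD-integrated zones), logons and internet name resolution fail together, which would match the described behaviour.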

    Read the article

  • Bad disks in ancient server

    - by Joel Coel
    I have a 1998-era NetWare 3.12 server that runs everything on our campus: general ledger, purchasing, payroll, student information, grades, you name it. The server has an Adaptec RAID controller with two volumes:
    - RAID 1: 2 x 17 GB SCSI disks, Seagate ST318417W
    - RAID 5: 3 x 4 GB SCSI disks, 2 Seagate ST34573W and 1 ST34572W
    We are currently in the early stages of a project to replace this system, but you don't just jump into a new system like that, and so I need to keep this server running until at least November 2011. This week we had not one but two hard drives fail. Thankfully they are from different volumes and we're able to keep running for the moment, but given the close nature of these failures I have serious doubts that I'll be able to avoid catastrophic failure from this server through the November target as is, without restoring the RAID redundancy: it'll only take one more drive failure anywhere and I'm completely hosed. We are fortunate enough to have exact-match "spares" lying around for both drives, but the spares are in unknown condition. I tried swapping just them in, but the RAID controller isn't smart enough to handle this and it renders the system unbootable. As for the RAID controller itself, there is a utility I can get into during POST via a Ctrl-A shortcut, but I can't do much useful from there. To actually manage volumes I must first boot into NetWare, at which point I can use CI/O Array Management Software Version 2.0 to actually look at volume information. I suspect that the normal way to manage things is to boot from a special floppy with the controller software on it, but that floppy is long gone. Going through the options in the RAID software, I think the only supported way to replace a disk in an existing RAID volume is to physically add the disk, boot up and configure it as a "spare" for a volume, force the volume to use the spare to replace an existing down disk (and at this point I'm only guessing) so that the down disk becomes the spare, repair the volume, remove the spare from the volume, and then shut down and remove the disk. Then start all over for the other failed disk. All this amounts to a lot of downtime, assuming I can even make it work and that my spares are any good. As for finding reliable spares, I have no clue where to even begin looking to find a new 4 GB SCSI drive, or even which exact SCSI system I'm looking for, as it's gone through a few different iterations over time. Another option is to migrate this to a virtual machine (Hyper-V), but all previous attempts we've made in this area have failed to get very far. When this machine was installed I was just graduating from high school, and so it requires lower-level knowledge of NetWare and DOS than I ever developed, or if I did, have since forgotten (I'm not exactly a DOS neophyte, either). Part of my problem is this is a high-use server, and taking it down for a few days to figure things out isn't gonna fly very well. As for the question, I'm looking for anything that might be helpful in this situation: a recommendation on a place to find good spares from this era, personal experience repairing RAID volumes using a similar controller or building a Hyper-V VM from an old NetWare server, a line on a floppy with better software for the RAID controller, a recommendation on a good Novell consultant in Nebraska who would be able to put things right, a whole other option I haven't considered yet, etc.
Update: For backups, we have good (recently verified via restore) backups of the data only -- nothing for the software that actually runs things. Update 2: Just a progress report that I currently have a working Netware 3.12 install in VMWare Virtual Server 2.0, thanks largely to the guide I found here: http://cerbulescubogdan.blogspot.com/2010/11/novell-netware-312-on-vmware.html The next steps are preparing empty netware volumes to match the additional volumes on my existing server, taking a dump of everything on the C:\ drive and netware volumes on my existing server, and figuring out from that information what modules need added to netware, installing my licenses (we do still have that disk, if it's any good), and moving data over. I have approval to bring the server down for a week after the first of the year (sadly not before), so, aside from creating empty volumes, the rest of the work will have to wait until then. Final Update (Jan 5, 2011): I was able to get spares working in both raid arrays without data loss this week. Both are now listed by the controller as "FAULT TOLLERANT" (yay!). I was also able to build on the progress from my last update and now have a functional "spare" server in VMWare Server 2.0. The spare can run and use our erp software, but I can't put it into production because I can't (yet) print from that box (and I have no idea why). Even so, this VM will do in a pinch if I have no other choice, and between it and the repaired RAID arrays I'm comfortable pushing on until I can junk the machine in November.

    Read the article

  • jqgrid setting custom formatter to dynamic column collection

    - by user312249
    I am using jqgrid. We are building dashboard functionality with jquery. Different applications just have to register their respective application page and the dashboard will render that page. To achieve this we are using jqgrid as one of the jquery plugins. Following is my code: var ph = '#' + placeHolder; var _prevSort; $.ajax({ url: dataUrl, dataType: "json", async: true, success: function(json) { pager = $('#' + pager); if (json.showPager === "false") { pager = eval(json.showPager); } dataUrl += "&jqSession=true"; $(ph).jqGrid({ url: dataUrl, datatype: "json", sortclass: "grid_sort", colNames: JSON.parse(json.colNames), colModel: JSON.parse(json.colModel), forceFit: true, rowNum: json.rowNum, rowList: JSON.parse(json.rowList), pager: pager, sortname: json.sortName, caption: json.caption, viewrecords: true, viewsortcols: true, sortorder: json.sortOrder, footerrow: summaryFooter, userDataOnFooter: summaryFooter, jsonReader: { root: "rows", row: "row", repeatitems: false, id: json.sortName }, gridComplete: function() { if (showFooter) { $(ph).append("" + json.footerRow + ""); } if (json.additionalContent != null) { $("#" + xContID).html(json.additionalContent); } $("ui-icon-asc").append("IMG"); var _rows = $(".jqgrow"); if (json.rows.length > 0) { for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } var gMaxHeight = getGridMaxHeight(); var gHeight = ($(ph + " tr").length + 1) * ($($(".jqgrow") [0]).height()); if (gHeight <= gMaxHeight) { $(ph).parent().height(gHeight); } else { $(ph).parent().height(gMaxHeight); } } else { $(ph).prepend("" + gridNoDataMsg + ""); $(ph).parent().height(60); } }, onSortCol: function(index, iCol, sortorder) { dataUrl = dataUrl.replace("&jqSession=true", ""); $(ph).jqGrid().setGridParam({ url: dataUrl }).trigger("reloadGrid"); var colName = "#jqgh" + index; // $(_prevSort).parent().removeClass("ui-jqgrid-sorted"); // $(_prevSort).parent().addClass("ui-state-default"); // $(_colName).parent().addClass("ui-jqgrid-sorted"); // $(_colName).parent().removeClass("ui-state-default"); _prevSort = _colName; var _rows = $(".jqgrow"); for (var i = 1; i < _rows.length; i += 1) { _rows[i].attributes["class"].value = _rows[i].attributes["class"].value.replace(" ui-jqgrid-altrow", ""); if (i % 2 == 1) { _rows[i].attributes["class"].value += " ui-jqgrid-altrow"; } } } }).navGrid('#' + pager, { search: false, sort: false, edit: false, add: false, del: false, refresh: false }); // end of grid $("#" + loadid).empty(); gGridIds[gGridIds.length] = placeHolder; SetGridSizes(); }, error: function() { $("#" + loadid).html(loadingErr); } }); As you can see from the code, I am getting the column collection dynamically (the application page which I am calling will give me JSON in the response, and it will have the colNames collection in it). Everything is working fine; the only issue comes when we try to apply a custom formatter to a column. This issue comes only when we dynamically assign "colModel" to jqgrid. Appreciate the help. Thanks in advance
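    The likely root cause (a hedged explanation, not confirmed in the thread): JSON cannot carry functions, so a colModel delivered as JSON can only name a formatter as a string. One common workaround is to post-process the parsed colModel and swap known names for real functions before handing it to jqGrid; a sketch with a hypothetical "money" formatter:

        // hypothetical registry mapping formatter names (strings in the JSON) to functions
        var formatters = {
            money: function (cellvalue, options, rowObject) {
                return "$" + parseFloat(cellvalue).toFixed(2);
            }
        };

        var colModel = JSON.parse(json.colModel);
        for (var i = 0; i < colModel.length; i++) {
            var name = colModel[i].formatter;
            if (typeof name === "string" && formatters[name]) {
                colModel[i].formatter = formatters[name]; // replace the string with the function
            }
        }
        // then pass this colModel to $(ph).jqGrid({ ..., colModel: colModel, ... })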

    Read the article

  • SMO ConnectionContext.StatementTimeout setting is ignored

    - by Woody
    I am successfully using Powershell with SMO to back up most databases. However, I have several large databases in which I receive a "timeout" error "System.Data.SqlClient.SqlException: Timeout expired". The timeout consistently occurs at 10 minutes. I have tried setting ConnectionContext.StatementTimeout to 0, 6000, and to [System.Int32]::MaxValue. The setting made no difference. I have found a number of Google references which indicate setting it to 0 makes it unlimited. No matter what I try, the timeouts consistently occur at 10 minutes. I even set Remote Query Timeout on the server to 0 (via Studio Manager) to no avail. Below is my SMO connection where I set the timeout, and the actual backup function. Further below is the output from my script. UPDATE Interestingly enough, I wrote the backup function in C# using VS 2008 and the timeout override does work within that environment. I am in the process of incorporating that C# process into my Powershell script until I can find out why the timeout override does not work with just Powershell. This is extremely annoying! function New-SMOconnection { Param ($server, $ApplicationName= "PowerShell SMO", [int]$StatementTimeout = 0 ) # Write-Debug "Function: New-SMOconnection $server $connectionname $commandtimeout" if (test-path variable:\conn) { $conn.connectioncontext.disconnect() } else { $conn = New-Object('Microsoft.SqlServer.Management.Smo.Server') $server } $conn.connectioncontext.applicationName = $applicationName $conn.ConnectionContext.StatementTimeout = $StatementTimeout $conn.connectioncontext.Connect() $conn } $smo = New-SMOConnection -server $server if ($smo.connectioncontext.isopen -eq $false) { Throw "Could not connect to server $($server)." } Function Backup-Database { Param([string]$dbname) $db = $smo.Databases.get_Item($dbname) if (!$db) {"Database $dbname was not found"; Return} $sqldir = $smo.Settings.BackupDirectory + "\$($smo.name -replace ("\\", "$"))" $s = ($server.Split('\'))[0] $basedir = "\\$s\" + $($sqldir -replace (":", "$")) $dt = get-date -format yyyyMMdd-HHmmss $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup') $dbbk.Action = 'Database' $dbbk.BackupSetDescription = "Full backup of " + $dbname $dbbk.BackupSetName = $dbname + " Backup" $dbbk.Database = $dbname $dbbk.MediaDescription = "Disk" $target = "$basedir\$dbname\FULL" if (-not(Test-Path $target)) { New-Item $target -ItemType directory | Out-Null} $device = "$sqldir\$dbname\FULL\" + $($server -replace("\\", "$")) + "_" + $dbname + "_FULL_" + $dt + ".bak" $dbbk.Devices.AddDevice($device, 'File') $dbbk.Initialize = $True $dbbk.Incremental = $false $dbbk.LogTruncation = [Microsoft.SqlServer.Management.Smo.BackupTruncateLogType]::Truncate If (!$copyonly) { If ($kill) {$smo.KillAllProcesses($dbname)} $dbbk.SqlBackupAsync($server) } $dbbk } Started SQL backups for server LCFSQLxxx\SQLxxx at 05/06/2010 15:33:16 Statement TimeOut value set to 0. DatabaseName : OperationsManagerDW StartBackupTime : 5/6/2010 3:33:16 PM EndBackupTime : 5/6/2010 3:43:17 PM StartCopyTime : 1/1/0001 12:00:00 AM EndCopyTime : 1/1/0001 12:00:00 AM CopiedFiles : Status : Failed ErrorMessage : System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The backup or restore was aborted. 10 percent processed. 20 percent processed. 30 percent processed. 40 percent processed. 50 percent processed. 60 percent processed. 70 percent processed. 
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) Ended backups at 05/06/2010 15:43:23
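    One hedged avenue to try, given that the author reports the same override working from C#: run the backup synchronously on the very connection whose StatementTimeout was raised, rather than via SqlBackupAsync, so the long-running statement demonstrably executes on the configured connection. A minimal sketch (server, database and path are placeholders; this is an assumption about the cause, not a confirmed fix):

        # sketch: synchronous backup on the connection with the raised timeout
        $server = New-Object Microsoft.SqlServer.Management.Smo.Server "LCFSQLxxx\SQLxxx"
        $server.ConnectionContext.StatementTimeout = 0    # 0 = no statement timeout
        $server.ConnectionContext.Connect()

        $backup = New-Object Microsoft.SqlServer.Management.Smo.Backup
        $backup.Action   = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database
        $backup.Database = "OperationsManagerDW"
        $backup.Devices.AddDevice("D:\Backups\OperationsManagerDW.bak", "File")
        $backup.SqlBackup($server)    # blocks until completion or a real error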

    Read the article

  • Problem with XML Deserialization C#

    - by alex
    I am having trouble with XML deserialization. In a nutshell - I have 2 classes: SMSMessage SMSSendingResponse I call an API that takes a bunch of parameters (represented by SMSMessage class) It returns an XML response. The response looks like this: <?xml version="1.0" encoding="utf-8"?> <data> <status>1</status> <message>OK</message> <results> <result> <account>12345</account> <to>012345678</to> <from>054321</from> <message>Testing</message> <flash></flash> <replace></replace> <report></report> <concat></concat> <id>f8d3eea1cbf6771a4bb02af3fb15253e</id> </result> </results> </data> Here is the SMSMessage class (with the xml serialization attributes so far) using System.Xml.Serialization; namespace XMLSerializationHelp { [XmlRoot("results")] public class SMSMessage { public string To { get { return Result.To; } } public string From { get { return Result.From; } } public string Message { get { return Result.Message; } } [XmlElement("result")] public Result Result { get; set; } } } Here is SMSMessageSendingResponse: using System.Xml.Serialization; namespace XMLSerializationHelp { [XmlRoot("data")] public class SMSSendingResponse { //should come from the results/result/account element. in our example "12345" public string AccountNumber { get { return SMSMessage.Result.AccountNumber; } } //should come from the "status" xml element [XmlElement("status")] public string Status { get; set; } //should come from the "message" xml element (in our example - "OK") [XmlElement("message")] public string Message { get; set; } //should come from the "id" xml element (in our example - "f8d3eea1cbf6771a4bb02af3fb15253e") public string ResponseID { get { return SMSMessage.Result.ResponseID; } } //should be created from the results/result element - ignore flash, replace, report and concat elements for now. [XmlElement("results")] public SMSMessage SMSMessage { get; set; } } } Here is the other class (Result) - I want to get rid of this, so only the 2 previously mentioned classes remain using System.Xml.Serialization; namespace XMLSerializationHelp { [XmlRoot("result")] public class Result { [XmlElement("account")] public string AccountNumber{ get; set; } [XmlElement("to")] public string To { get; set; } [XmlElement("from")] public string From { get; set; } [XmlElement("message")] public string Message { get; set; } [XmlElement("id")] public string ResponseID { get; set; } } } I don't want SMSMessage to be aware of the SMSSendingResponse - as this will be handled by a different part of my application
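    One way the intermediate Result class can be removed (a sketch against the XML shown in the question; it folds Result's properties into SMSMessage, and the collection property name is illustrative): the results/result nesting maps directly onto [XmlArray]/[XmlArrayItem], so SMSSendingResponse can hold the collection itself:

        using System.Collections.Generic;
        using System.Xml.Serialization;

        [XmlRoot("data")]
        public class SMSSendingResponse
        {
            [XmlElement("status")]
            public string Status { get; set; }

            [XmlElement("message")]
            public string Message { get; set; }

            // "results" is the wrapper element, "result" each item
            [XmlArray("results")]
            [XmlArrayItem("result")]
            public List<SMSMessage> Messages { get; set; }
        }

        public class SMSMessage
        {
            [XmlElement("account")]
            public string AccountNumber { get; set; }

            [XmlElement("to")]
            public string To { get; set; }

            [XmlElement("from")]
            public string From { get; set; }

            [XmlElement("message")]
            public string Message { get; set; }

            [XmlElement("id")]
            public string ResponseID { get; set; }
        }

    This also removes the coupling the poster objects to: SMSMessage no longer needs to know anything about the response wrapper.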

    Read the article

  • DataTable to JSON

    - by Joel Coehoorn
    I recently needed to serialize a datatable to JSON. Where I'm at we're still on .Net 2.0, so I can't use the JSON serializer in .Net 3.5. I figured this must have been done before, so I went looking online and found a number of different options. Some of them depend on an additional library, which I would have a hard time pushing through here. Others require first converting to List<Dictionary<>>, which seemed a little awkward and needless. Another treated all values like a string. For one reason or another I couldn't really get behind any of them, so I decided to roll my own, which is posted below. As you can see from reading the //TODO comments, it's incomplete in a few places. This code is already in production here, so it does "work" in the basic sense. The places where it's incomplete are places where we know our production data won't currently hit it (no timespans or byte arrays in the db). The reason I'm posting here is that I feel like this can be a little better, and I'd like help finishing and improving this code. Any input welcome. public static class JSONHelper { public static string FromDataTable(DataTable dt) { string rowDelimiter = ""; StringBuilder result = new StringBuilder("["); foreach (DataRow row in dt.Rows) { result.Append(rowDelimiter); result.Append(FromDataRow(row)); rowDelimiter = ","; } result.Append("]"); return result.ToString(); } public static string FromDataRow(DataRow row) { DataColumnCollection cols = row.Table.Columns; string colDelimiter = ""; StringBuilder result = new StringBuilder("{"); for (int i = 0; i < cols.Count; i++) { // use index rather than foreach, so we can use the index for both the row and cols collection result.Append(colDelimiter).Append("\"") .Append(cols[i].ColumnName).Append("\":") .Append(JSONValueFromDataRowObject(row[i], cols[i].DataType)); colDelimiter = ","; } result.Append("}"); return result.ToString(); } // possible types: // http://msdn.microsoft.com/en-us/library/system.data.datacolumn.datatype(VS.80).aspx private static Type[] numeric = new Type[] {typeof(byte), typeof(decimal), typeof(double), typeof(Int16), typeof(Int32), typeof(SByte), typeof(Single), typeof(UInt16), typeof(UInt32), typeof(UInt64)}; // I don't want to rebuild this value for every date cell in the table private static long EpochTicks = new DateTime(1970, 1, 1).Ticks; private static string JSONValueFromDataRowObject(object value, Type DataType) { // null if (value == DBNull.Value) return "null"; // numeric if (Array.IndexOf(numeric, DataType) > -1) return value.ToString(); // TODO: eventually want to use a stricter format // boolean if (DataType == typeof(bool)) return ((bool)value) ? "true" : "false"; // date -- see http://weblogs.asp.net/bleroy/archive/2008/01/18/dates-and-json.aspx if (DataType == typeof(DateTime)) return "\"\\/Date(" + new TimeSpan(((DateTime)value).ToUniversalTime().Ticks - EpochTicks).TotalMilliseconds.ToString() + ")\\/\""; // TODO: add Timespan support // TODO: add Byte[] support //TODO: this would be _much_ faster with a state machine // string/char return "\"" + value.ToString().Replace(@"\", @"\\").Replace(Environment.NewLine, @"\n").Replace("\"", @"\""") + "\""; } }
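    For the two open TODOs, additions along these lines would fit the existing structure (a sketch; plain "hh:mm:ss" text for TimeSpan and Base64 for byte arrays are assumptions about the desired wire format, since JSON has no native type for either):

        // sketch: candidate branches for JSONValueFromDataRowObject
        if (DataType == typeof(TimeSpan))
            return "\"" + value.ToString() + "\"";              // e.g. "01:30:00"

        if (DataType == typeof(byte[]))
            return "\"" + Convert.ToBase64String((byte[])value) + "\"";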

    Read the article

  • Sending two arrays using ajax post request

    - by sachin taware
    I am working on filter functionality using ajax/jquery and php/mysql. I have two sets of checkboxes: 1) for Regions, 2) for Localities. The filter is similar to the one here: http://www.proptiger.com/property-in-pune-real-estate.php I want to send the values of both sets of checkboxes to filter the records. The filter for localities will be locally filtered on the selection of a region checkbox. I have got it to work up to some extent. This is called on the first set of checkboxes. Html <div class="locality"> <input type="checkbox" id="checkbox1" class="checkbox1" value="<?php echo $suburb['suburb_name']?>" name="Suburb_check[]" onClick="changeResults();" onChange="" ><?php echo $suburb['suburb_name']?> <span class="grey">(<?php echo $suburb['total']?>)</span> </div> <?php }?> Javascript/Jquery function changeResults(){ var data = { 'venue[]' : []}; $("input:checked").each(function() { var chck1 = $(this).val(); //alert(chck1); data['venue[]'].push($(this).val()); }); $.ajax({ type : 'POST', url : 'process.php', data : data, success : function(data){ $('#project_section').html(data); // replace the contents coming from php file } }); $.ajax({ type : 'POST', url : 'loadLocality.php', data : data, success : function(data){ document.getElementById("searchLoader").style.display = 'block'; $('#localityList').html(data); // replace the contents coming from php file // alert(data); document.getElementById("searchLoader").style.display = 'none'; } }); } This is the second set of checkboxes, with Localities <div class="locality" id="localities"> <input type="checkbox" onClick="changeLocality();" id="1" value="<?php echo $locality['locality_name'];?>" name="Locality_check[]"><?php echo $locality['locality_name'];?> <span class="grey">(<?php echo $locality['total'];?>)</span> </div> I have called a function similar to the above one and posted it to a different page. Here is the second checkbox function: function changeLocality(){ var dataLocality = {'locality[]' : []}; $("input:checked").each(function() { var chcklocal = $(this).val(); //alert(chcklocal); dataLocality['locality[]'].push($(this).val()); }); $.ajax({ type : 'POST', url : 'processLocality.php', data : dataLocality, success : function(dataLocality){ // document.getElementById("newloader").style.display ="block"; $('#project_section').html(dataLocality); // replace the contents coming from php file //alert('data'); // document.getElementById("newloader").style.display ="none"; } }); } But, when I select a region box and then a locality box and then deselect the region, I also get the previous locality value in the region array (the name of the array is venue). I want only regions to go in the venue array, and regions+localities in the locality array. Actually, if I deselect the region, the subsequent locality value should also be removed from the array. Also, even though I am posting them to different pages, the region page holds the locality values. I am stuck, as I don't have much jQuery knowledge. Went through posts, but was not able to fix it. Any help would be appreciated. EDIT I get an array when I check the first set of checkboxes, and also filter the records using the above functions. Array ( [venue] => Array ( [0] => Pune East [1] => Pune West [2] => Pune North [3] => Pune South ) )
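    The described symptom (locality values leaking into the venue array) follows directly from both handlers selecting every checked input on the page with $("input:checked"). A hedged fix is to scope each selector to its own checkbox group by the name attributes already present in the markup:

        function changeResults() {
            var data = { 'venue[]': [] };
            // only region checkboxes, not every checked input on the page
            $("input[name='Suburb_check[]']:checked").each(function () {
                data['venue[]'].push($(this).val());
            });
            // ... the two $.ajax calls stay as before ...
        }

        function changeLocality() {
            var dataLocality = { 'locality[]': [] };
            // regions + localities, which is what the locality request should carry
            $("input[name='Suburb_check[]']:checked, input[name='Locality_check[]']:checked")
                .each(function () {
                    dataLocality['locality[]'].push($(this).val());
                });
            // ... the $.ajax call stays as before ...
        }

    With the selectors scoped this way, deselecting a region also drops it from the next request automatically, since the arrays are rebuilt from the currently checked boxes on every call.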

    Read the article

  • Jquery, XML and Google Map

    - by EXPennD
    Hi, I'm integrating a Google Map into my website so that users can add thumbnails and details of their own houses. Here's a code preview of what I want to happen. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> Jquery and Google Map // var locations = {}; function load() { var map = new GMap2(document.getElementById("map")); map.setCenter(new GLatLng(47.614495, -122.341861), 13); GDownloadUrl("markerdata.xml", function(data) { var xml = GXml.parse(data); var markers = xml.documentElement.getElementsByTagName("marker"); for (var i = 0; i < markers.length; i++) { var name = markers[i].getAttribute("name"); var address = markers[i].getAttribute("address"); var type = markers[i].getAttribute("type"); var latlng = new GLatLng(parseFloat(markers[i].getAttribute("lat")), parseFloat(markers[i].getAttribute("lng"))); var store = {latlng: latlng, name: name, address: address, type: type}; var latlngHash = (latlng.lat().toFixed(6) + "" + latlng.lng().toFixed(6)); latlngHash = latlngHash.replace(".","").replace(".", "").replace("-",""); if (locations[latlngHash] == null) { locations[latlngHash] = [] } locations[latlngHash].push(store); } for (var latlngHash in locations) { var stores = locations[latlngHash]; if (stores.length > 1) { map.addOverlay(createClusteredMarker(stores)); } else { map.addOverlay(createMarker(stores)); } } }); } function createMarker(stores) { var store = stores[0]; var newIcon = MapIconMaker.createMarkerIcon({width: 32, height: 32, primaryColor: "#00ff00"}); var marker = new GMarker(store.latlng, {icon: newIcon}); var html = "<b>" + store.name + "</b> <br/>" + store.address; GEvent.addListener(marker, 'click', function() { marker.openInfoWindowHtml(html); }); return marker; } function createClusteredMarker(stores) { var newIcon = MapIconMaker.createMarkerIcon({width: 44, height: 44, primaryColor: "#00ff00"}); var marker = new GMarker(stores[0].latlng, {icon: newIcon}); var html = ""; for (var i = 0; i < stores.length; i++) { html += "<b>" + stores[i].name + "</b> <br/>" + stores[i].address + "<br/>"; } GEvent.addListener(marker, 'click', function() { marker.openInfoWindowHtml(html); }); return marker; } //]]> I want this feature to be fully interactive. If possible, the user can drag and drop a marker onto the location on the Google map, and the description field would be enabled after adding the marker so the user could add details and submit them. Also, here's my current situation. The reason why I want it done in XML is that the Content Management System I currently use for this project doesn't allow me to add databases or PHP scripts. The only access I have is that I can add new HTML in the BODY section and external JavaScript in the HEAD section. Sorry about the way I wrote this; it sounds demanding. It's because I'm still learning jQuery. Thanks everyone!
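    For the interactive part (user drops a pin, then fills in details), the same Google Maps v2 API used in the question supports draggable markers; a hedged sketch, with the form element IDs as hypothetical names:

        // sketch using the Google Maps v2 API from the question
        var marker = new GMarker(map.getCenter(), { draggable: true });
        map.addOverlay(marker);

        GEvent.addListener(marker, "dragend", function () {
            // remember where the user dropped the pin and unlock the details form
            var point = marker.getLatLng();
            document.getElementById("lat").value = point.lat();
            document.getElementById("lng").value = point.lng();
            document.getElementById("description").disabled = false;
        });

    Since the CMS only allows static HTML plus external JavaScript, the submit step would have to post to some external endpoint (or a third-party form service) that can append the new marker to markerdata.xml.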

    Read the article

  • Problem with Google Chrome

    - by user365559
    Hi. I have a JavaScript file for history management. It does not work in Chrome: when I try to navigate to the previous page with the back button in the browser, I can see the URL change, but it doesn't go to the preceding page. BrowserHistoryUtils = { addEvent: function(elm, evType, fn, useCapture) { useCapture = useCapture || false; if (elm.addEventListener) { elm.addEventListener(evType, fn, useCapture); return true; } else if (elm.attachEvent) { var r = elm.attachEvent('on' + evType, fn); return r; } else { elm['on' + evType] = fn; } } } BrowserHistory = (function() { // type of browser var browser = { ie: false, firefox: false, safari: false, opera: false, version: -1 }; // if setDefaultURL has been called, our first clue // that the SWF is ready and listening //var swfReady = false; // the URL we'll send to the SWF once it is ready //var pendingURL = ''; // Default app state URL to use when no fragment ID present var defaultHash = ''; // Last-known app state URL var currentHref = document.location.href; // Initial URL (used only by IE) var initialHref = document.location.href; // Initial URL (used only by IE) var initialHash = document.location.hash; // History frame source URL prefix (used only by IE) var historyFrameSourcePrefix = 'history/historyFrame.html?'; // History maintenance (used only by Safari) var currentHistoryLength = -1; var historyHash = []; var initialState = createState(initialHref, initialHref + '#' + initialHash, initialHash); var backStack = []; var forwardStack = []; var currentObjectId = null; //UserAgent detection var useragent = navigator.userAgent.toLowerCase(); if (useragent.indexOf("opera") != -1) { browser.opera = true; } else if (useragent.indexOf("msie") != -1) { browser.ie = true; browser.version = parseFloat(useragent.substring(useragent.indexOf('msie') + 4)); } else if (useragent.indexOf("safari") != -1) { browser.safari = true; browser.version = parseFloat(useragent.substring(useragent.indexOf('safari') + 7)); } else if (useragent.indexOf("gecko") != -1) { browser.firefox = true; } if (browser.ie == true && browser.version == 7) { window["_ie_firstload"] = false; } // Accessor functions for obtaining specific elements of the page. function getHistoryFrame() { return document.getElementById('ie_historyFrame'); } function getAnchorElement() { return document.getElementById('firefox_anchorDiv'); } function getFormElement() { return document.getElementById('safari_formDiv'); } function getRememberElement() { return document.getElementById("safari_remember_field"); } // Get the Flash player object for performing ExternalInterface callbacks. // Updated for changes to SWFObject2. 
function getPlayer(id) { if (id && document.getElementById(id)) { var r = document.getElementById(id); if (typeof r.SetVariable != "undefined") { return r; } else { var o = r.getElementsByTagName("object"); var e = r.getElementsByTagName("embed"); if (o.length > 0 && typeof o[0].SetVariable != "undefined") { return o[0]; } else if (e.length > 0 && typeof e[0].SetVariable != "undefined") { return e[0]; } } } else { var o = document.getElementsByTagName("object"); var e = document.getElementsByTagName("embed"); if (e.length > 0 && typeof e[0].SetVariable != "undefined") { return e[0]; } else if (o.length > 0 && typeof o[0].SetVariable != "undefined") { return o[0]; } else if (o.length > 1 && typeof o[1].SetVariable != "undefined") { return o[1]; } } return undefined; } function getPlayers() { var players = []; if (players.length == 0) { var tmp = document.getElementsByTagName('object'); players = tmp; } if (players.length == 0 || players[0].object == null) { var tmp = document.getElementsByTagName('embed'); players = tmp; } return players; } function getIframeHash() { var doc = getHistoryFrame().contentWindow.document; var hash = String(doc.location.search); if (hash.length == 1 && hash.charAt(0) == "?") { hash = ""; } else if (hash.length >= 2 && hash.charAt(0) == "?") { hash = hash.substring(1); } return hash; } /* Get the current location hash excluding the '#' symbol. */ function getHash() { // It would be nice if we could use document.location.hash here, // but it's faulty sometimes. var idx = document.location.href.indexOf('#'); return (idx >= 0) ? document.location.href.substr(idx+1) : ''; } /* Get the current location hash excluding the '#' symbol. */ function setHash(hash) { // It would be nice if we could use document.location.hash here, // but it's faulty sometimes. if (hash == '') hash = '#' document.location.hash = hash; } function createState(baseUrl, newUrl, flexAppUrl) { return { 'baseUrl': baseUrl, 'newUrl': newUrl, 'flexAppUrl': flexAppUrl, 'title': null }; } /* Add a history entry to the browser. * baseUrl: the portion of the location prior to the '#' * newUrl: the entire new URL, including '#' and following fragment * flexAppUrl: the portion of the location following the '#' only */ function addHistoryEntry(baseUrl, newUrl, flexAppUrl) { //delete all the history entries forwardStack = []; if (browser.ie) { //Check to see if we are being asked to do a navigate for the first //history entry, and if so ignore, because it's coming from the creation //of the history iframe if (flexAppUrl == defaultHash && document.location.href == initialHref && window['_ie_firstload']) { currentHref = initialHref; return; } if ((!flexAppUrl || flexAppUrl == defaultHash) && window['_ie_firstload']) { newUrl = baseUrl + '#' + defaultHash; flexAppUrl = defaultHash; } else { // for IE, tell the history frame to go somewhere without a '#' // in order to get this entry into the browser history. 
getHistoryFrame().src = historyFrameSourcePrefix + flexAppUrl; } setHash(flexAppUrl); } else { //ADR if (backStack.length == 0 && initialState.flexAppUrl == flexAppUrl) { initialState = createState(baseUrl, newUrl, flexAppUrl); } else if(backStack.length > 0 && backStack[backStack.length - 1].flexAppUrl == flexAppUrl) { backStack[backStack.length - 1] = createState(baseUrl, newUrl, flexAppUrl); } if (browser.safari) { // for Safari, submit a form whose action points to the desired URL if (browser.version <= 419.3) { var file = window.location.pathname.toString(); file = file.substring(file.lastIndexOf("/")+1); getFormElement().innerHTML = '<form name="historyForm" action="'+file+'#' + flexAppUrl + '" method="GET"></form>'; //get the current elements and add them to the form var qs = window.location.search.substring(1); var qs_arr = qs.split("&"); for (var i = 0; i < qs_arr.length; i++) { var tmp = qs_arr[i].split("="); var elem = document.createElement("input"); elem.type = "hidden"; elem.name = tmp[0]; elem.value = tmp[1]; document.forms.historyForm.appendChild(elem); } document.forms.historyForm.submit(); } else { top.location.hash = flexAppUrl; } // We also have to maintain the history by hand for Safari historyHash[history.length] = flexAppUrl; _storeStates(); } else { // Otherwise, write an anchor into the page and tell the browser to go there addAnchor(flexAppUrl); setHash(flexAppUrl); } } backStack.push(createState(baseUrl, newUrl, flexAppUrl)); } function _storeStates() { if (browser.safari) { getRememberElement().value = historyHash.join(","); } } function handleBackButton() { //The "current" page is always at the top of the history stack. var current = backStack.pop(); if (!current) { return; } var last = backStack[backStack.length - 1]; if (!last && backStack.length == 0){ last = initialState; } forwardStack.push(current); } function handleForwardButton() { //summary: private method. Do not call this directly. var last = forwardStack.pop(); if (!last) { return; } backStack.push(last); } function handleArbitraryUrl() { //delete all the history entries forwardStack = []; } /* Called periodically to poll to see if we need to detect navigation that has occurred */ function checkForUrlChange() { if (browser.ie) { if (currentHref != document.location.href && currentHref + '#' != document.location.href) { //This occurs when the user has navigated to a specific URL //within the app, and didn't use browser back/forward //IE seems to have a bug where it stops updating the URL it //shows the end-user at this point, but programatically it //appears to be correct. Do a full app reload to get around //this issue. if (browser.version < 7) { currentHref = document.location.href; document.location.reload(); } else { if (getHash() != getIframeHash()) { // this.iframe.src = this.blankURL + hash; var sourceToSet = historyFrameSourcePrefix + getHash(); getHistoryFrame().src = sourceToSet; } } } } if (browser.safari) { // For Safari, we have to check to see if history.length changed. if (currentHistoryLength >= 0 && history.length != currentHistoryLength) { //alert("did change: " + history.length + ", " + historyHash.length + "|" + historyHash[history.length] + "|>" + historyHash.join("|")); // If it did change, then we have to look the old state up // in our hand-maintained array since document.location.hash // won't have changed, then call back into BrowserManager. 
currentHistoryLength = history.length; var flexAppUrl = historyHash[currentHistoryLength]; if (flexAppUrl == '') { //flexAppUrl = defaultHash; } //ADR: to fix multiple if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { pl[i].browserURLChange(flexAppUrl); } } else { getPlayer().browserURLChange(flexAppUrl); } _storeStates(); } } if (browser.firefox) { if (currentHref != document.location.href) { var bsl = backStack.length; var urlActions = { back: false, forward: false, set: false } if ((window.location.hash == initialHash || window.location.href == initialHref) && (bsl == 1)) { urlActions.back = true; // FIXME: could this ever be a forward button? // we can't clear it because we still need to check for forwards. Ugg. // clearInterval(this.locationTimer); handleBackButton(); } // first check to see if we could have gone forward. We always halt on // a no-hash item. if (forwardStack.length > 0) { if (forwardStack[forwardStack.length-1].flexAppUrl == getHash()) { urlActions.forward = true; handleForwardButton(); } } // ok, that didn't work, try someplace back in the history stack if ((bsl >= 2) && (backStack[bsl - 2])) { if (backStack[bsl - 2].flexAppUrl == getHash()) { urlActions.back = true; handleBackButton(); } } if (!urlActions.back && !urlActions.forward) { var foundInStacks = { back: -1, forward: -1 } for (var i = 0; i < backStack.length; i++) { if (backStack[i].flexAppUrl == getHash() && i != (bsl - 2)) { arbitraryUrl = true; foundInStacks.back = i; } } for (var i = 0; i < forwardStack.length; i++) { if (forwardStack[i].flexAppUrl == getHash() && i != (bsl - 2)) { arbitraryUrl = true; foundInStacks.forward = i; } } handleArbitraryUrl(); } // Firefox changed; do a callback into BrowserManager to tell it. currentHref = document.location.href; var flexAppUrl = getHash(); if (flexAppUrl == '') { //flexAppUrl = defaultHash; } //ADR: to fix multiple if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { pl[i].browserURLChange(flexAppUrl); } } else { getPlayer().browserURLChange(flexAppUrl); } } } //setTimeout(checkForUrlChange, 50); } /* Write an anchor into the page to legitimize it as a URL for Firefox et al. 
*/ function addAnchor(flexAppUrl) { if (document.getElementsByName(flexAppUrl).length == 0) { getAnchorElement().innerHTML += "<a name='" + flexAppUrl + "'>" + flexAppUrl + "</a>"; } } var _initialize = function () { if (browser.ie) { var scripts = document.getElementsByTagName('script'); for (var i = 0, s; s = scripts[i]; i++) { if (s.src.indexOf("history.js") > -1) { var iframe_location = (new String(s.src)).replace("history.js", "historyFrame.html"); } } historyFrameSourcePrefix = iframe_location + "?"; var src = historyFrameSourcePrefix; var iframe = document.createElement("iframe"); iframe.id = 'ie_historyFrame'; iframe.name = 'ie_historyFrame'; //iframe.src = historyFrameSourcePrefix; try { document.body.appendChild(iframe); } catch(e) { setTimeout(function() { document.body.appendChild(iframe); }, 0); } } if (browser.safari) { var rememberDiv = document.createElement("div"); rememberDiv.id = 'safari_rememberDiv'; document.body.appendChild(rememberDiv); rememberDiv.innerHTML = '<input type="text" id="safari_remember_field" style="width: 500px;">'; var formDiv = document.createElement("div"); formDiv.id = 'safari_formDiv'; document.body.appendChild(formDiv); var reloader_content = document.createElement('div'); reloader_content.id = 'safarireloader'; var scripts = document.getElementsByTagName('script'); for (var i = 0, s; s = scripts[i]; i++) { if (s.src.indexOf("history.js") > -1) { html = (new String(s.src)).replace(".js", ".html"); } } reloader_content.innerHTML = '<iframe id="safarireloader-iframe" src="about:blank" frameborder="no" scrolling="no"></iframe>'; document.body.appendChild(reloader_content); reloader_content.style.position = 'absolute'; reloader_content.style.left = reloader_content.style.top = '-9999px'; iframe = reloader_content.getElementsByTagName('iframe')[0]; if (document.getElementById("safari_remember_field").value != "" ) { historyHash = document.getElementById("safari_remember_field").value.split(","); } } if (browser.firefox) { var anchorDiv = document.createElement("div"); anchorDiv.id = 'firefox_anchorDiv'; document.body.appendChild(anchorDiv); } //setTimeout(checkForUrlChange, 50); } return { historyHash: historyHash, backStack: function() { return backStack; }, forwardStack: function() { return forwardStack }, getPlayer: getPlayer, initialize: function(src) { _initialize(src); }, setURL: function(url) { document.location.href = url; }, getURL: function() { return document.location.href; }, getTitle: function() { return document.title; }, setTitle: function(title) { try { backStack[backStack.length - 1].title = title; } catch(e) { } //if on safari, set the title to be the empty string. if (browser.safari) { if (title == "") { try { var tmp = window.location.href.toString(); title = tmp.substring((tmp.lastIndexOf("/")+1), tmp.lastIndexOf("#")); } catch(e) { title = ""; } } } document.title = title; }, setDefaultURL: function(def) { defaultHash = def; def = getHash(); //trailing ? is important else an extra frame gets added to the history //when navigating back to the first page. Alternatively could check //in history frame navigation to compare # and ?. 
if (browser.ie) { window['_ie_firstload'] = true; var sourceToSet = historyFrameSourcePrefix + def; var func = function() { getHistoryFrame().src = sourceToSet; window.location.replace("#" + def); setInterval(checkForUrlChange, 50); } try { func(); } catch(e) { window.setTimeout(function() { func(); }, 0); } } if (browser.safari) { currentHistoryLength = history.length; if (historyHash.length == 0) { historyHash[currentHistoryLength] = def; var newloc = "#" + def; window.location.replace(newloc); } else { //alert(historyHash[historyHash.length-1]); } //setHash(def); setInterval(checkForUrlChange, 50); } if (browser.firefox || browser.opera) { var reg = new RegExp("#" + def + "$"); if (window.location.toString().match(reg)) { } else { var newloc ="#" + def; window.location.replace(newloc); } setInterval(checkForUrlChange, 50); //setHash(def); } }, /* Set the current browser URL; called from inside BrowserManager to propagate * the application state out to the container. */ setBrowserURL: function(flexAppUrl, objectId) { if (browser.ie && typeof objectId != "undefined") { currentObjectId = objectId; } //fromIframe = fromIframe || false; //fromFlex = fromFlex || false; //alert("setBrowserURL: " + flexAppUrl); //flexAppUrl = (flexAppUrl == "") ? defaultHash : flexAppUrl ; var pos = document.location.href.indexOf('#'); var baseUrl = pos != -1 ? document.location.href.substr(0, pos) : document.location.href; var newUrl = baseUrl + '#' + flexAppUrl; if (document.location.href != newUrl && document.location.href + '#' != newUrl) { currentHref = newUrl; addHistoryEntry(baseUrl, newUrl, flexAppUrl); currentHistoryLength = history.length; } return false; }, browserURLChange: function(flexAppUrl) { var objectId = null; if (browser.ie && currentObjectId != null) { objectId = currentObjectId; } pendingURL = ''; if (typeof BrowserHistory_multiple != "undefined" && BrowserHistory_multiple == true) { var pl = getPlayers(); for (var i = 0; i < pl.length; i++) { try { pl[i].browserURLChange(flexAppUrl); } catch(e) { } } } else { try { getPlayer(objectId).browserURLChange(flexAppUrl); } catch(e) { } } currentObjectId = null; } } })(); // Initialization // Automated unit testing and other diagnostics function setURL(url) { document.location.href = url; } function backButton() { history.back(); } function forwardButton() { history.forward(); } function goForwardOrBackInHistory(step) { history.go(step); } //BrowserHistoryUtils.addEvent(window, "load", function() { BrowserHistory.initialize(); }); (function(i) { var u =navigator.userAgent;var e=/*@cc_on!@*/false; var st = setTimeout; if(/webkit/i.test(u)){ st(function(){ var dr=document.readyState; if(dr=="loaded"||dr=="complete"){i()} else{st(arguments.callee,10);}},10); } else if((/mozilla/i.test(u)&&!/(compati)/.test(u)) || (/opera/i.test(u))){ document.addEventListener("DOMContentLoaded",i,false); } else if(e){ (function(){ var t=document.createElement('doc:rdy'); try{t.doScroll('left'); i();t=null; }catch(e){st(arguments.callee,0);}})(); } else{ window.onload=i; } })( function() {BrowserHistory.initialize();} );
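    A plausible explanation for the Chrome failure (an assumption, not confirmed in the thread): this shim predates Chrome, and its UA sniffing classifies Chrome as Safari (Chrome's user agent contains "safari"), so Chrome gets the Safari code path with its hand-maintained history.length bookkeeping, which Chrome handles differently. On any browser with native support, the entire polling approach can be replaced by the hashchange event; a minimal sketch:

        // minimal modern replacement for the polling-based URL watcher
        window.addEventListener("hashchange", function () {
            var flexAppUrl = window.location.hash.replace(/^#/, "");
            // notify the embedded app, mirroring the shim's browserURLChange call
            console.log("navigated to state:", flexAppUrl);
        });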

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy at the moment of unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful not only for developing WPF or Silverlight based applications but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is no longer necessary to manually set or remove the links in the data context when an item is added to or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection.

    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

    var context = new OrderContext();
    context.AddToOrders(order);
    context.AddToOrderItems(item);
    context.SetLink(item, "Order", order);
    context.SaveChanges();

    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S.

    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

    order.Items.Add(item);

    var context = new OrderContext();
    context.AddToOrders(order);
    context.SaveChanges();

    In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (you can find more information about this new feature and how to use it in this post).

    DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0

    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly.

    1. All the entities retrieved directly from the data context with a query track their changes and report them to the data context automatically.

    2. An entity created with "new" does not track any change in its properties or associations. In order to enable change tracking on such an entity, you need to do the following trick.

    public Order CreateOrder()
    {
      var collection = new DataServiceCollection<Order>(this.context);
      var order = new Order();
      collection.Add(order);
      return order;
    }

    You basically need to create a collection and add the entity to that collection with the "Add" method to enable change tracking on that entity.

    3. If you need to attach an existing entity (for example, one you created with the "new" operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the "Load" method of the DataServiceCollection.

    var order = new Order { Id = 1 };
    var collection = new DataServiceCollection<Order>(this.context);
    collection.Load(order);

    In this case, the order with Id = 1 must exist in the data source exposed by the data service; otherwise, you will get an error because the entity does not exist.
    These cool extension methods, discussed by Stuart Leeks in this post to replace all the magic strings in the “Expand” operation with expression trees, represent another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query with magic strings by a piece of code that only uses expressions.

    Magic strings:

    var customers = dataContext.Customers
        .Expand("Orders")
        .Expand("Orders/Items");

    Expressions:

    var customers = dataContext.Customers
        .Expand(c => c.Orders.SubExpand(o => o.Items));

    That query basically returns all the customers with their orders and order items. OK, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this:

    public interface IRepository
    {
      T Create<T>() where T : new();
      void Update<T>(T entity);
      void Delete<T>(T entity);
      IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
      IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties);
      void Attach<T>(T entity);
      void SaveChanges();
    }

    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository.

    public class DataServiceRepository : IRepository
    {
      DataServiceContext context;

      public DataServiceRepository()
        : this(new DataServiceContext())
      {
      }

      public DataServiceRepository(DataServiceContext context)
      {
        this.context = context;
      }

      private static string ResolveEntitySet(Type type)
      {
        var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault();
        if (entitySetAttribute != null)
          return entitySetAttribute.EntitySet;
        return null;
      }

      public T Create<T>() where T : new()
      {
        var collection = new DataServiceCollection<T>(this.context);
        var entity = new T();
        collection.Add(entity);
        return entity;
      }

      public void Update<T>(T entity)
      {
        this.context.UpdateObject(entity);
      }

      public void Delete<T>(T entity)
      {
        this.context.DeleteObject(entity);
      }

      public void Attach<T>(T entity)
      {
        var collection = new DataServiceCollection<T>(this.context);
        collection.Load(entity);
      }

      public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
      {
        var entitySet = ResolveEntitySet(typeof(T));
        var query = context.CreateQuery<T>(entitySet);
        foreach (var e in eagerProperties)
        {
          query = query.Expand(e);
        }
        return query.Where(predicate);
      }

      public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
      {
        var entitySet = ResolveEntitySet(typeof(T));
        var query = context.CreateQuery<T>(entitySet);
        foreach (var e in eagerProperties)
        {
          query = query.Expand(e);
        }
        return query;
      }

      public void SaveChanges()
      {
        this.context.SaveChanges(SaveChangesOptions.Batch);
      }
    }

    For instance, you can use the following code to retrieve all the customers with first name equal to "John", together with their orders and order items, in a single call.

    repository.Retrieve<Customer>(
        c => c.FirstName == "John", // Where
        c => c.Orders.SubExpand(o => o.Items));

    If you want to reuse some pre-defined queries across several places, you can put them in a specific class.
    public static class CustomerQueries
    {
      public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
      {
        return c => c.LastName == lastName;
      }
    }

    And then, use it with the repository.

    repository.Retrieve<Customer>(
        CustomerQueries.LastNameEqualsTo("foo"),
        c => c.Orders.SubExpand(o => o.Items));
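    Since the whole point of hiding the data service behind IRepository is testability, here is a minimal sketch of what a test double for this interface might look like. This is my addition, not part of the original article; the InMemoryRepository name is hypothetical, and the sketch assumes only the IRepository interface defined above.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class InMemoryRepository : IRepository
    {
      // Hypothetical in-memory store keyed by entity type; for illustration only.
      private readonly Dictionary<Type, List<object>> store = new Dictionary<Type, List<object>>();

      private List<object> ListFor(Type t)
      {
        if (!store.ContainsKey(t)) store[t] = new List<object>();
        return store[t];
      }

      public T Create<T>() where T : new()
      {
        var entity = new T();
        ListFor(typeof(T)).Add(entity);
        return entity;
      }

      public void Update<T>(T entity) { /* nothing to persist in memory */ }
      public void Delete<T>(T entity) { ListFor(typeof(T)).Remove(entity); }
      public void Attach<T>(T entity) { ListFor(typeof(T)).Add(entity); }

      public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
      {
        // Eager-loading expressions are ignored; in-memory objects are already "loaded".
        return ListFor(typeof(T)).Cast<T>().AsQueryable();
      }

      public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties)
      {
        return RetrieveAll<T>().Where(predicate);
      }

      public void SaveChanges() { /* no batching needed in memory */ }
    }

    A unit test can then hand an InMemoryRepository to the code under test instead of a DataServiceRepository, and no data service needs to be running.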

    Read the article

  • Oracle PL/SQL question for my homework in Oracle 11g class [migrated]

    - by Bjolds
    I am new to Oracle 11g programming and I have run into a tough situation with PL/SQL functions and automation. I am unsure how to create the functions to automate the registration system for a college registration system. Here is what I want to do: I want to automate the registration system so that it automatically registers students, and then I want a procedure to automate the grading system. I have included the code I have written, which makes most of this assignment work, but I am unsure how to incorporate automated PL/SQL functions for the registration system and the grading system. Any help or ideas would be greatly appreciated.

set linesize 250
set pagesize 150

drop table student;
drop table faculty;
drop table Course;
drop table Section;
drop table location;
DROP TABLE courseInstructor;
DROP TABLE Registration;
DROP TABLE grade;

create table student(
  studentid Number(10),
  Lastname Varchar2(20),
  Firstname Varchar2(20),
  MI Char(1),
  address Varchar2(20),
  city Varchar2(20),
  state Char(2),
  zip Varchar2(10),
  HomePhone Varchar2(10),
  Workphone Varchar2(10),
  DOB Date,
  Pin VARCHAR2(10),
  Status Char(1));

ALTER TABLE Student Add Constraint Student_StudentID_pk Primary Key (studentID);

Insert into student values (1,'xxxxxxxx','xxxxxxxxxx','x','xxxxxxxxxxxxxxx','Columbus','oh','44159','xxx-xxx-xxxx','xxx-xxx-xxxx','06-Mar-1957','1211','c');

create table faculty(
  FacultyID Number(10),
  FirstName Varchar2(20),
  Lastname Varchar2(20),
  MI Char(1),
  workphone Varchar2(10),
  CellPhone Varchar2(10),
  Rank Varchar2(20),
  Experience Varchar2(10),
  Status Char(1));

ALTER TABLE Faculty ADD Constraint Faculty_facultyId_PK PRIMARY KEY (FacultyID);

insert into faculty values (1,'xxx','xxxxxxxxxxxx','x','xxx-xxx-xxxx','xxx-xxx-xxxx','professor','20','f');

create table Course(
  CourseId Number(10),
  CourseNumber Varchar2(20),
  CourseName Varchar(20),
  Description Varchar(20),
  CreditHours Number(4),
  Status Char(1));

ALTER TABLE Course ADD Constraint Course_CourseID_pk PRIMARY KEY(CourseID);

insert into course values (1,'cit 100','computer concepts','introduction to PCs','3.0','o');
insert into course values (2,'cit 101','Database Program','Database Programming','4.0','o');
insert into course values (3,'Math 101','Algebra I','Algebra I Concepts','5.0','o');
insert into course values (4,'cit 102a','Pc applications','Applications 1','3.0','o');
insert into course values (5,'cit 102b','pc applications','applications 2','3.0','o');
insert into course values (6,'cit 102c','pc applications','applications 3','3.0','o');
insert into course values (7,'cit 103','computer concepts','introduction systems','3.0','c');
insert into course values (8,'cit 110','Unified language','UML design','3.0','o');
insert into course values (9,'cit 165','cobol','cobol programming','3.0','o');
insert into course values (10,'cit 167','C++ Programming 1','c++ programming','4.0','o');
insert into course values (11,'cit 231','Expert Excel','spreadsheet apps','3.0','o');
insert into course values (12,'cit 233','expert Access','database devel.','3.0','o');
insert into course values (13,'cit 169','Java Programming I','Java Programming I','3.0','o');
insert into course values (14,'cit 263','Visual Basic','Visual Basic Prog','3.0','o');
insert into course values (15,'cit 275','system analysis 2','System Analysis 2','3.0','o');

create table Section(
  SectionID Number(10),
  CourseId Number(10),
  SectionNumber VarChar2(10),
  Days Varchar2(10),
  StartTime Date,
  EndTime Date,
  LocationID Number(10),
  SeatAvailable Number(3),
  Status Char(1));

ALTER TABLE Section
  ADD Constraint Section_SectionID_PK PRIMARY KEY(SectionID);

insert into section values (1,1,'18977','r','21-Sep-2011','10-Dec-2011','1','89','o');

create table Location(
  LocationId Number(10),
  Building Varchar2(20),
  Room Varchar2(5),
  Capacity Number(5),
  Status Char(1));

ALTER TABLE Location ADD Constraint Location_LocationID_pk PRIMARY KEY (LocationID);

insert into Location values (1,'Cleveland Hall','cl209','35','o');
insert into Location values (2,'Toledo Circle','tc211','45','o');
insert into Location values (3,'Akron Square','as154','65','o');
insert into Location values (4,'Cincy Hall','ch100','45','o');
insert into Location values (5,'Springfield Dome','SD','35','o');
insert into Location values (6,'Dayton Dorm','dd225','25','o');
insert into Location values (7,'Columbus Hall','CB354','15','o');
insert into Location values (8,'Cleveland Hall','cl204','85','o');
insert into Location values (9,'Toledo Circle','tc103','75','o');
insert into Location values (10,'Akron Square','as201','46','o');
insert into Location values (11,'Cincy Hall','ch301','73','o');
insert into Location values (12,'Dayton Dorm','dd245','57','o');
insert into Location values (13,'Springfield Dome','SD','65','o');
insert into Location values (14,'Cleveland Hall','cl241','10','o');
insert into Location values (15,'Toledo Circle','tc211','27','o');
insert into Location values (16,'Akron Square','as311','28','o');
insert into Location values (17,'Cincy Hall','ch415','73','o');
insert into Location values (18,'Toledo Circle','tc111','67','o');
insert into Location values (19,'Springfield Dome','SD','69','o');
insert into Location values (20,'Dayton Dorm','dd211','45','o');

Alter Table Student Add Constraint student_Zip_CK Check(Rtrim(Zip,'1234567890-') is null);
Alter Table Student ADD Constraint Student_Status_CK Check(Status In('c','t'));
Alter Table Student ADD Constraint Student_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz') is Null);
Alter Table Student Modify pin not Null;

Alter table Faculty Add Constraint Faculty_Status_CK Check(Status In('f','a','i'));
Alter table Faculty ADD Constraint Faculty_Rank_CK Check(Rank In ('professor','doctor','instructor','assistant','tenure'));
Alter table Faculty ADD Constraint Faculty_MI_CK2 Check(RTRIM(MI,'abcdefghijklmnopqrstuvwxyz') is Null);

Update Section Set Starttime = To_date('09-21-2011 6:00 PM', 'mm-dd-yyyy hh:mi pm');
Update Section Set Endtime = To_date('12-10-2011 9:50 PM', 'mm-dd-yyyy hh:mi pm');

alter table Section Add Constraint StartTime_Status_CK Check (starttime < Endtime);
Alter Table Section Add Constraint Section_StartTime_ck check (StartTime < EndTime);
Alter Table Section ADD Constraint Section_CourseId_FK FOREIGN KEY (CourseID) References Course(CourseId);
Alter Table Section ADD Constraint Section_LocationID_FK FOREIGN KEY (LocationID) References Location (LocationId);
Alter Table Section ADD Constraint Section_Days_CK Check(RTRIM(Days,'mtwrfsu') IS Null);

update section set seatavailable = '99';
Alter Table Section ADD Constraint Section_SeatsAvailable_CK Check (SeatAvailable < 100);
Alter Table Course Add Constraint Course_CreditHours_ck check(CreditHours <= 6.0);
update location set capacity = '99';
Alter Table Location Add Constraint Location_Capacity_CK Check(Capacity < 100);

Create Table Registration (
  StudentID Number(10),
  SectionID Number(10),
  Constraint Registration_pk Primary key (studentId, Sectionid));

Insert into registration values (1, 2);
Insert into Registration values (2, 3);
Insert into registration values (3, 4);
Insert into registration values (4, 5);
Insert into registration values (5, 6);
Insert into registration values (6, 7);
Insert into registration values (7, 8);
Insert into registration values (8, 9);
insert into registration values (9, 10);
insert into registration values (10, 11);
insert into registration values (9, 12);
insert into registration values (8, 13);
insert into registration values (7, 14);
insert into registration values (6, 15);
insert into registration values (5, 17);
insert into registration values (4, 18);
insert into registration values (3, 19);
insert into registration values (2, 20);
insert into registration values (1, 21);
insert into registration values (2, 22);
insert into registration values (3, 23);
insert into registration values (4, 24);
insert into registration values (5, 25);
Insert into registration values (6, 24);
insert into registration values (7, 23);
insert into registration values (8, 22);
insert into registration values (9, 21);
insert into registration values (10, 20);
insert into registration values (9, 19);
insert into registration values (8, 17);

Create Table courseInstructor(
  FacultyID Number(10),
  SectionID Number(10),
  Constraint CourseInstructor_pk Primary key (FacultyId, SectionID));

insert into courseInstructor values (1, 1);
insert into courseInstructor values (2, 2);
insert into courseInstructor values (3, 3);
insert into courseInstructor values (4, 4);
insert into courseInstructor values (5, 5);
insert into courseInstructor values (5, 6);
insert into courseInstructor values (4, 7);
insert into courseInstructor values (3, 8);
insert into courseInstructor values (2, 9);
insert into courseInstructor values (1, 10);
insert into courseInstructor values (5, 11);
insert into courseInstructor values (4, 12);
insert into courseInstructor values (3, 13);
insert into courseInstructor values (2, 14);
insert into courseInstructor values (1, 15);

Create table grade(
  StudentID Number(10),
  SectionID Number(10),
  Grade Varchar2(1),
  Constraint grade_pk Primary key (StudentID, SectionID));

CREATE OR REPLACE TRIGGER TR_CreateGrade
AFTER INSERT ON Registration
FOR EACH ROW
BEGIN
  INSERT INTO grade (SectionID, StudentID, Grade)
  VALUES (:New.SectionID, :New.StudentID, NULL);
END TR_createGrade;
/

CREATE OR REPLACE FORCE VIEW V_reg_student_course AS
SELECT Registration.StudentID,
       student.LastName,
       student.FirstName,
       course.CourseName,
       Registration.SectionID,
       course.CreditHours,
       section.Days,
       TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate,
       TO_CHAR(StartTime, 'HH:MI PM') AS StartTime,
       TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate,
       TO_CHAR(EndTime, 'HH:MI PM') AS EndTime,
       location.Building,
       location.Room
FROM registration, student, section, course, location
WHERE registration.StudentID = student.StudentID
  AND registration.SectionID = section.SectionID
  AND section.LocationID = location.LocationID
  AND section.CourseID = course.CourseID;

CREATE OR REPLACE FORCE VIEW V_teacher_to_course AS
SELECT courseInstructor.FacultyID,
       faculty.FirstName,
       faculty.LastName,
       courseInstructor.SectionID,
       section.Days,
       TO_CHAR(StartTime, 'MM/DD/YYYY') AS StartDate,
       TO_CHAR(StartTime, 'HH:MI PM') AS StartTime,
       TO_CHAR(EndTime, 'MM/DD/YYYY') AS EndDate,
       TO_CHAR(EndTime, 'HH:MI PM') AS EndTime,
       location.Building,
       location.Room
FROM courseInstructor, faculty, section, course, location
WHERE courseInstructor.FacultyID = faculty.FacultyID
  AND courseInstructor.SectionID = section.SectionID
  AND section.LocationID = location.LocationID
  AND section.CourseID = course.CourseID;

SELECT * FROM V_reg_student_course;
SELECT * FROM V_teacher_to_course;
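
    For the automation the question asks about, one possible direction is a stored procedure that registers a student only while the section still has seats, plus a procedure that records a grade. The sketch below is not from the original post: the procedure names sp_auto_register and sp_record_grade are hypothetical, and it assumes the Section, Registration, and grade tables defined above (note that the existing TR_CreateGrade trigger already creates the grade row on registration).

CREATE OR REPLACE PROCEDURE sp_auto_register (
  p_studentid IN Registration.StudentID%TYPE,
  p_sectionid IN Registration.SectionID%TYPE
) AS
  v_seats Section.SeatAvailable%TYPE;
BEGIN
  -- Lock the section row so two registrations cannot take the last seat.
  SELECT SeatAvailable INTO v_seats
  FROM Section
  WHERE SectionID = p_sectionid
  FOR UPDATE;

  IF v_seats <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Section is full.');
  END IF;

  INSERT INTO Registration (StudentID, SectionID)
  VALUES (p_studentid, p_sectionid);

  UPDATE Section
  SET SeatAvailable = SeatAvailable - 1
  WHERE SectionID = p_sectionid;

  COMMIT;
END sp_auto_register;
/

CREATE OR REPLACE PROCEDURE sp_record_grade (
  p_studentid IN grade.StudentID%TYPE,
  p_sectionid IN grade.SectionID%TYPE,
  p_grade     IN grade.Grade%TYPE
) AS
BEGIN
  -- The grade row already exists thanks to TR_CreateGrade; just fill in the grade.
  UPDATE grade
  SET Grade = p_grade
  WHERE StudentID = p_studentid
    AND SectionID = p_sectionid;
  COMMIT;
END sp_record_grade;
/

    From SQL*Plus these can then be driven in a loop or from application code, for example: EXEC sp_auto_register(1, 1); followed later by EXEC sp_record_grade(1, 1, 'A');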

    Read the article

  • Denali Paging–Key seek lookups

    - by Dave Ballantyne
    In my previous post, “Denali Paging – is it win.win?”, I demonstrated the use of the paging functionality within Denali. On reflection, I think I may have been a little unfair, and I had always planned to continue my investigations to the next step. In Paul's article, he uses a combination of CTEs to first scan the ordered keys, which are then filtered using TOP and ROW_NUMBER, and then uses those keys to seek the data. So what happens if we replace the scanning portion of the code with the Denali paging functionality? Here's the original procedure; we are going to replace the functionality of the Keys and SelectedKeys CTEs:

CREATE PROCEDURE dbo.FetchPageKeySeek
    @PageSize   BIGINT,
    @PageNumber BIGINT
AS
BEGIN
    -- Key-Seek algorithm
    WITH Keys
    AS (
        -- Step 1: Number the rows from the non-clustered index
        -- Maximum number of rows = @PageNumber * @PageSize
        SELECT TOP (@PageNumber * @PageSize)
               rn = ROW_NUMBER() OVER (ORDER BY P1.post_id ASC),
               P1.post_id
        FROM   dbo.Post P1
        ORDER  BY P1.post_id ASC
    ),
    SelectedKeys
    AS (
        -- Step 2: Get the primary keys for the rows on the page we want
        -- Maximum number of rows from this stage = @PageSize
        SELECT TOP (@PageSize)
               SK.rn,
               SK.post_id
        FROM   Keys SK
        WHERE  SK.rn > ((@PageNumber - 1) * @PageSize)
        ORDER  BY SK.post_id ASC
    )
    SELECT -- Step 3: Retrieve the off-index data
           -- We will only have @PageSize rows by this stage
           SK.rn,
           P2.post_id,
           P2.thread_id,
           P2.member_id,
           P2.create_dt,
           P2.title,
           P2.body
    FROM   SelectedKeys SK
    JOIN   dbo.Post P2
           ON P2.post_id = SK.post_id
    ORDER  BY SK.post_id ASC;
END;

    and here is the replacement procedure using paging:

CREATE PROCEDURE dbo.FetchOffsetPageKeySeek
    @PageSize   BIGINT,
    @PageNumber BIGINT
AS
BEGIN
    -- Key-Seek algorithm
    WITH SelectedKeys
    AS (
        SELECT post_id
        FROM   dbo.Post P1
        ORDER  BY post_id ASC
        OFFSET @PageSize * (@PageNumber - 1) ROWS
        FETCH NEXT @PageSize ROWS ONLY
    )
    SELECT P2.post_id,
           P2.thread_id,
           P2.member_id,
           P2.create_dt,
           P2.title,
           P2.body
    FROM   SelectedKeys SK
    JOIN   dbo.Post P2
           ON P2.post_id = SK.post_id
    ORDER  BY SK.post_id ASC;
END;

    Notice how all I have done is replace the functionality of the Keys and SelectedKeys CTEs with the paging functionality. So, what is the comparative performance now? Exactly the same amount of IO and memory usage, but it's now pretty obvious that in terms of CPU and overall duration we are onto a winner.
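
    For readers who want to compare the two shapes themselves, a minimal way to exercise both procedures side by side is shown below. This is my addition, not part of the original post; it assumes the dbo.Post table used throughout the article.

-- Fetch page 3 at 50 rows per page with each implementation,
-- then compare the CPU and duration figures reported by STATISTICS TIME.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.FetchPageKeySeek       @PageSize = 50, @PageNumber = 3;
EXEC dbo.FetchOffsetPageKeySeek @PageSize = 50, @PageNumber = 3;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;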

    Read the article

  • Java: Interface for this code

    - by ibrahim
    Please, I need help making an interface for this code (a hypothetical interface is sketched after the listing):

package com.ejada.alinma.edh.xsdtransform;

import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.StringWriter;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Properties;
import java.util.StringTokenizer;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.apache.log4j.Logger; // was commented out, but Logger is used below
import org.apache.log4j.PropertyConfigurator;
import org.w3c.dom.Document;
import org.w3c.dom.DocumentFragment;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

import com.sun.org.apache.xml.internal.serialize.OutputFormat;
import com.sun.org.apache.xml.internal.serialize.XMLSerializer;

/**
 * An XSD Transformer that replaces the "name" attribute's value in T24 XSDs
 * with the "shortname" attribute's value
 *
 * @author ahusseiny
 */
public class XSDTransformer {

  /** constants representing the XSD tags and attributes' names used in the parse process */
  public static final String TAG_SCHEMA = "xsd:schema";
  public static final String TAG_TEXT = "#text";
  public static final String TAG_COMPLEX_TYPE = "xsd:complexType";
  public static final String TAG_SIMPLE_TYPE = "xsd:simpleType";
  public static final String TAG_SEQUENCE = "xsd:sequence";
  public static final String TAG_ATTRIBUTE = "xsd:attribute";
  public static final String TAG_ELEMENT = "xsd:element";
  public static final String TAG_ANNOTATION = "xsd:annotation";
  public static final String TAG_APP_INFO = "xsd:appinfo";
  public static final String TAG_HAS_PROPERTY = "xsd:hasProperty";
  public static final String TAG_RESTRICTION = "xsd:restriction";
  public static final String TAG_MAX_LENGTH = "xsd:maxLength";
  public static final String ATTR_NAME = "name";
  public static final String ATTR_VALUE = "value";
  public static final String ATTR_TYPE = "type";
  public static final String ATTR_MIXED = "mixed";
  public static final String ATTR_USE = "use";
  public static final String ATTR_REF = "ref";
  public static final String ATTR_MAX_OCCURS = "maxOccurs";

  /** constants representing specific XSD attributes' values used in the parse process */
  public static final String FIELD_TAG = "fieldtag";
  public static final String FIELD_NUMBER = "fieldnumber";
  public static final String FIELD_DATA_TYPE = "fielddatatype";
  public static final String FIELD_FMT = "fieldfmt";
  public static final String FIELD_LEN = "fieldlen";
  public static final String FIELD_INPUT_LEN = "fieldinputlen";
  public static final String FIELD_GROUP_NUMBER = "fieldgroupnumber";
  public static final String FIELD_MV_GROUP_NUMBER = "fieldmvgroupnumber";
  public static final String FIELD_SHORT_NAME = "fieldshortname";
  public static final String FIELD_NAME = "fieldname";
  public static final String FIELD_COLUMN_NAME = "fieldcolumnname";
  public static final String FIELD_GROUP_NAME = "fieldgroupname";
  public static final String FIELD_MV_GROUP_NAME = "fieldmvgroupname";
  public static final String FIELD_JUSTIFICATION = "fieldjustification";
  public static final String FIELD_TYPE = "fieldtype";
  public static final String FIELD_SINGLE_OR_MULTI = "singleormulti";
  public static final String DELIMITER_COLUMN_TYPE = "#";
  public static final String COLUMN_FK_ROW = "FK_ROW";
  public static final String COLUMN_XPK_ROW = "XPK_ROW";
  public static final int SQL_VIEW_MULTI = 1;
  public static final int SQL_VIEW_SINGLE = 2;
  public static final String DATA_TYPE_XSD_NUMERIC = "numeric";
  public static final String DATA_TYPE_XSD_DECIMAL = "decimal";
  public static final String DATA_TYPE_XSD_STRING = "string";
  public static final String DATA_TYPE_XSD_DATE = "date";

  /** application configuration properties */
  public static final String PROP_LOG4J_CONFIG_FILE = "log4j_config";
  public static final String PROP_MAIN_VIEW_NAME_SINGLE = "view_name_single";
  public static final String PROP_MAIN_VIEW_NAME_MULTI = "view_name_multi";
  public static final String PROP_MAIN_TABLE_NAME = "main_edh_table_name";
  public static final String PROP_SUB_TABLE_PREFIX = "sub_table_prefix";
  public static final String PROP_SOURCE_XSD_FULLNAME = "source_xsd_fullname";
  public static final String PROP_RESULTS_PATH = "results_path";
  public static final String PROP_NEW_XSD_FILENAME = "new_xsd_filename";
  public static final String PROP_CSV_FILENAME = "csv_filename";

  /** static holders for application-level utilities */
  private static Properties appProps;
  private static Logger appLogger;

  private StringBuffer sqlViewColumnsSingle = null;
  private StringBuffer sqlViewSelectSingle = null;
  private StringBuffer columnsCSV = null;
  private ArrayList<String> singleValueTableColumns = null;
  private HashMap<String, String> multiValueTablesSQL = null;
  private HashMap<Object, HashMap<String, Object>> groupAttrs = null;

  public XSDTransformer(String appConfigPropsPath) {
    if (appProps == null) {
      appProps = new Properties();
    }
    try {
      init(appConfigPropsPath);
    } catch (Exception e) {
      appLogger.error(e.getMessage());
    }
  }

  /** initialization */
  private void init(String appConfigPropsPath) throws Exception {
    // init the properties object
    FileReader in = new FileReader(appConfigPropsPath);
    appProps.load(in);
    // init the logger
    if ((appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE) != null)
        && (!appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE).equals(""))) {
      PropertyConfigurator.configure(appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE));
      if (appLogger == null) {
        appLogger = Logger.getLogger(XSDTransformer.class.getName());
      }
      appLogger.info("Application initialization successful.");
    }
    sqlViewColumnsSingle = new StringBuffer();
    sqlViewSelectSingle = new StringBuffer();
    columnsCSV = new StringBuffer(XSDTransformer.FIELD_TAG + "," + XSDTransformer.FIELD_NUMBER + ","
        + XSDTransformer.FIELD_DATA_TYPE + "," + XSDTransformer.FIELD_FMT + ","
        + XSDTransformer.FIELD_LEN + "," + XSDTransformer.FIELD_INPUT_LEN + ","
        + XSDTransformer.FIELD_GROUP_NUMBER + "," + XSDTransformer.FIELD_MV_GROUP_NUMBER + ","
        + XSDTransformer.FIELD_SHORT_NAME + "," + XSDTransformer.FIELD_NAME + ","
        + XSDTransformer.FIELD_COLUMN_NAME + "," + XSDTransformer.FIELD_GROUP_NAME + ","
        + XSDTransformer.FIELD_MV_GROUP_NAME + "," + XSDTransformer.FIELD_JUSTIFICATION + ","
        + XSDTransformer.FIELD_TYPE + "," + XSDTransformer.FIELD_SINGLE_OR_MULTI
        + System.getProperty("line.separator"));
    singleValueTableColumns = new ArrayList<String>();
    singleValueTableColumns.add(XSDTransformer.COLUMN_XPK_ROW + XSDTransformer.DELIMITER_COLUMN_TYPE
        + XSDTransformer.DATA_TYPE_XSD_NUMERIC);
    multiValueTablesSQL = new HashMap<String, String>();
    groupAttrs = new HashMap<Object, HashMap<String, Object>>();
  }

  /**
   * initialize the <code>DocumentBuilder</code> and read the XSD file
   *
   * @param docPath
   * @return the <code>Document</code> object representing the read XSD file
   */
  private Document retrieveDoc(String docPath) {
    Document xsdDoc = null;
    File file = new File(docPath);
    try {
      DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
      xsdDoc = builder.parse(file);
    } catch (Exception e) {
      appLogger.error(e.getMessage());
    }
    return xsdDoc;
  }

  /**
   * perform the iteration/modification on the document
   * iterate to the level which contains all the elements (Single-Value, and Groups) and start processing each
   *
   * @param xsdDoc
   * @return
   */
  private Document transformDoc(Document xsdDoc) {
    ArrayList<Object> newElementsList = new ArrayList<Object>();
    HashMap<String, Object> docAttrMap = new HashMap<String, Object>();
    Element sequenceElement = null;
    Element schemaElement = null;
    // get document's root element
    NodeList nodes = xsdDoc.getChildNodes();
    for (int i = 0; i < nodes.getLength(); i++) {
      if (XSDTransformer.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) {
        schemaElement = (Element) nodes.item(i);
        break;
      }
    }
    // process the document (change single-value elements, collect list of new elements to be added)
    for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) {
      Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1);
      // <ComplexType> element
      if (childLevel1.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) {
        // first, get the main attributes and put it in the csv file
        for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) {
          Node child6 = childLevel1.getChildNodes().item(i6);
          if (XSDTransformer.TAG_ATTRIBUTE.equals(child6.getNodeName())) {
            if (child6.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) {
              String attrName = child6.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue();
              if (((Element) child6).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).getLength() != 0) {
                Node simpleTypeElement = ((Element) child6).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).item(0);
                if (((Element) simpleTypeElement).getElementsByTagName(XSDTransformer.TAG_RESTRICTION).getLength() != 0) {
                  Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName(XSDTransformer.TAG_RESTRICTION).item(0);
                  if (((Element) restrictionElement).getElementsByTagName(XSDTransformer.TAG_MAX_LENGTH).getLength() != 0) {
                    Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName(XSDTransformer.TAG_MAX_LENGTH).item(0);
                    HashMap<String, String> elementProperties = new HashMap<String, String>();
                    elementProperties.put(XSDTransformer.FIELD_TAG, attrName);
                    elementProperties.put(XSDTransformer.FIELD_NUMBER, "0");
                    elementProperties.put(XSDTransformer.FIELD_DATA_TYPE, XSDTransformer.DATA_TYPE_XSD_STRING);
                    elementProperties.put(XSDTransformer.FIELD_FMT, "");
                    elementProperties.put(XSDTransformer.FIELD_NAME, attrName);
                    elementProperties.put(XSDTransformer.FIELD_SHORT_NAME, attrName);
                    elementProperties.put(XSDTransformer.FIELD_COLUMN_NAME, attrName);
                    elementProperties.put(XSDTransformer.FIELD_SINGLE_OR_MULTI, "S");
                    elementProperties.put(XSDTransformer.FIELD_LEN,
                        maxLengthElement.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue());
                    elementProperties.put(XSDTransformer.FIELD_INPUT_LEN,
                        maxLengthElement.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue());
                    constructElementRow(elementProperties);
                    // add the attribute as a column in the single-value table
                    singleValueTableColumns.add(attrName + XSDTransformer.DELIMITER_COLUMN_TYPE
                        + XSDTransformer.DATA_TYPE_XSD_STRING + XSDTransformer.DELIMITER_COLUMN_TYPE
                        + maxLengthElement.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue());
                    // add the attribute as a column in the single-values view
                    sqlViewColumnsSingle.append(System.getProperty("line.separator") + attrName + ", ");
                    sqlViewSelectSingle.append(System.getProperty("line.separator") + attrName + ", ");
                    appLogger.debug("added attribute: " + attrName);
                  }
                }
              }
            }
          }
        }
        // now, loop on the elements and process them
        for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) {
          Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2);
          // <Sequence> element
          if (childLevel2.getNodeName().equals(XSDTransformer.TAG_SEQUENCE)) {
            sequenceElement = (Element) childLevel2;
            for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) {
              Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3);
              // <Element> element
              if (childLevel3.getNodeName().equals(XSDTransformer.TAG_ELEMENT)) {
                // check if single element or group
                if (isGroup(childLevel3)) {
                  processGroup(childLevel3, true, null, docAttrMap, xsdDoc, newElementsList);
                  // insert a new comment node with the contents of the group tag
                  sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3);
                  // remove the group tag
                  sequenceElement.removeChild(childLevel3);
                } else {
                  processElement(childLevel3);
                }
              }
            }
          }
        }
      }
    }
    // add new elements
    // this step should be after finishing processing the whole document. when you add new elements to the document
    // while you are working on it, those new elements will be included in the processing. We don't need that!
    for (int i = 0; i < newElementsList.size(); i++) {
      sequenceElement.appendChild((Element) newElementsList.get(i));
    }
    // write the new required attributes to the schema element
    Iterator<String> attrIter = docAttrMap.keySet().iterator();
    while (attrIter.hasNext()) {
      Element attr = (Element) docAttrMap.get(attrIter.next());
      Element newAttrElement = xsdDoc.createElement(XSDTransformer.TAG_ATTRIBUTE);
      appLogger.debug("appending attr. [" + attr.getAttribute(XSDTransformer.ATTR_NAME) + "]...");
      newAttrElement.setAttribute(XSDTransformer.ATTR_NAME, attr.getAttribute(XSDTransformer.ATTR_NAME));
      newAttrElement.setAttribute(XSDTransformer.ATTR_TYPE, attr.getAttribute(XSDTransformer.ATTR_TYPE));
      schemaElement.appendChild(newAttrElement);
    }
    return xsdDoc;
  }

  /**
   * check if the <code>element</code> sent is single-value element or group
   * element. the comparison depends on the children of the element. if found one of type
   * <code>ComplexType</code> then it's a group element, and if of type
   * <code>SimpleType</code> then it's a single-value element
   *
   * @param element
   * @return <code>true</code> if the element is a group element, <code>false</code> otherwise
   */
  private boolean isGroup(Node element) {
    for (int i = 0; i < element.getChildNodes().getLength(); i++) {
      Node child = (Node) element.getChildNodes().item(i);
      if (child.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) {
        // found a ComplexType child (Group element)
        return true;
      } else if (child.getNodeName().equals(XSDTransformer.TAG_SIMPLE_TYPE)) {
        // found a SimpleType child (Single-Value element)
        return false;
      }
    }
    return false;
    /*
    String attrName = null;
    if (element.getAttributes() != null) {
      Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME);
      if (attribute != null) {
        attrName = attribute.getNodeValue();
      }
    }
    if (attrName.startsWith("g")) {
      // group element
      return true;
    } else {
      // single element
      return false;
    }
    */
  }

  /**
   * process a group element. recursively, process groups till no more group elements are found
   *
   * @param element
   * @param isFirstLevelGroup
   * @param attrMap
   * @param docAttrMap
   * @param xsdDoc
   * @param newElementsList
   */
  private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup,
      HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) {
    String elementName = null;
    HashMap<String, Object> groupAttrMap = new HashMap<String, Object>();
    HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>();
    if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) {
      elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue();
    }
    appLogger.debug("processing group [" + elementName + "]...");
    // get the attributes if a non-first-level-group
    // attributes are: group's own attributes + parent group's attributes
    if (!isFirstLevelGroup) {
      // get the current element (group) attributes
      for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) {
        if (XSDTransformer.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) {
          Node complexTypeNode = element.getChildNodes().item(i1);
          for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) {
            if (XSDTransformer.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) {
              appLogger.debug("add group attr: "
                  + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME));
              groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME),
                  complexTypeNode.getChildNodes().item(i2));
              docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME),
                  complexTypeNode.getChildNodes().item(i2));
            }
          }
        }
      }
      // now, get the parent's attributes
      parentGroupAttrMap = groupAttrs.get(parentGroup);
      if (parentGroupAttrMap != null) {
        Iterator<String> iter = parentGroupAttrMap.keySet().iterator();
        while (iter.hasNext()) {
          String attrName = iter.next();
          groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName));
        }
      }
      // put the attributes in the attributes map
      groupAttrs.put(element, groupAttrMap);
    }
    for (int i = 0; i < element.getChildNodes().getLength(); i++) {
      Node childLevel1 = (Node) element.getChildNodes().item(i);
      if (childLevel1.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) {
        for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) {
          Node childLevel2 = (Node) childLevel1.getChildNodes().item(j);
          if (childLevel2.getNodeName().equals(XSDTransformer.TAG_SEQUENCE)) {
            for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) {
              Node childLevel3 = (Node) childLevel2.getChildNodes().item(k);
              if (childLevel3.getNodeName().equals(XSDTransformer.TAG_ELEMENT)) {
                // check if single element or group
                if (isGroup(childLevel3)) {
                  // another group element.. unfortunately, a recursion is needed here!!! :-(
                  processGroup(childLevel3, false, element, docAttrMap, xsdDoc, newElementsList);
                } else {
                  // reached a single-value element.. copy it under the main sequence
                  // and apply the name-shortname replacement
                  processGroupElement(childLevel3, element, isFirstLevelGroup, xsdDoc, newElementsList);
                }
              }
            }
          }
        }
      }
    }
    appLogger.debug("finished processing group [" + elementName + "].");
  }

  /**
   * process the sent <code>element</code> to extract/modify required information:
   * 1. replace the <code>name</code> attribute with the <code>shortname</code>.
   *
   * @param element
   */
  private void processElement(Node element) {
    String fieldShortName = null;
    String fieldColumnName = null;
    String fieldDataType = null;
    String fieldFormat = null;
    String fieldInputLength = null;
    String elementName = null;
    HashMap<String, String> elementProperties = new HashMap<String, String>();
    if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) {
      elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue();
    }
    appLogger.debug("processing element [" + elementName + "]...");
    for (int i = 0; i < element.getChildNodes().getLength(); i++) {
      Node childLevel1 = (Node) element.getChildNodes().item(i);
      if (childLevel1.getNodeName().equals(XSDTransformer.TAG_ANNOTATION)) {
        for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) {
          Node childLevel2 = (Node) childLevel1.getChildNodes().item(j);
          if (childLevel2.getNodeName().equals(XSDTransformer.TAG_APP_INFO)) {
            for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) {
              Node childLevel3 = (Node) childLevel2.getChildNodes().item(k);
              if (childLevel3.getNodeName().equals(XSDTransformer.TAG_HAS_PROPERTY)) {
                if (childLevel3.getAttributes() != null) {
                  String attrName = null;
                  Node attribute = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME);
                  if (attribute != null) {
                    attrName = attribute.getNodeValue();
                    elementProperties.put(attrName,
                        childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue());
                    if (attrName.equals(XSDTransformer.FIELD_SHORT_NAME)) {
                      fieldShortName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_COLUMN_NAME)) {
                      fieldColumnName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_DATA_TYPE)) {
                      fieldDataType = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_FMT)) {
                      fieldFormat = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_INPUT_LEN)) {
                      fieldInputLength = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
    if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) {
      element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).setNodeValue(fieldShortName);
    }
    sqlViewColumnsSingle.append(System.getProperty("line.separator") + fieldColumnName + ", ");
    sqlViewSelectSingle.append(System.getProperty("line.separator") + fieldShortName + ", ");
    elementProperties.put(XSDTransformer.FIELD_SINGLE_OR_MULTI, "S");
    constructElementRow(elementProperties);
    singleValueTableColumns.add(fieldShortName + XSDTransformer.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat
        + XSDTransformer.DELIMITER_COLUMN_TYPE + fieldInputLength);
    appLogger.debug("finished processing element [" + elementName + "].");
  }

  /**
   * process the sent <code>element</code> to extract/modify required information:
   * 1. copy the element under the main sequence
   * 2. replace the <code>name</code> attribute with the <code>shortname</code>.
   * 3. add the attributes of the parent groups (if non-first-level-group)
   *
   * @param element
   */
  private void processGroupElement(Node element, Node parentGroup, boolean isFirstLevelGroup,
      Document xsdDoc, ArrayList<Object> newElementsList) {
    String fieldShortName = null;
    String fieldColumnName = null;
    String fieldDataType = null;
    String fieldFormat = null;
    String fieldInputLength = null;
    String elementName = null;
    Element newElement = null;
    HashMap<String, String> elementProperties = new HashMap<String, String>();
    ArrayList<String> tableColumns = new ArrayList<String>();
    HashMap<String, Object> groupAttrMap = null;
    if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) {
      elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue();
    }
    appLogger.debug("processing element [" + elementName + "]...");
    // 1. copy the element
    newElement = (Element) element.cloneNode(true);
    newElement.setAttribute(XSDTransformer.ATTR_MAX_OCCURS, "unbounded");
    // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag
    if (!isFirstLevelGroup) {
      if (((Element) newElement).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).getLength() != 0) {
        // there should be only one tag of SimpleType
        Node simpleTypeNode = ((Element) newElement).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).item(0);
        // create the new ComplexType element
        Element complexTypeNode = xsdDoc.createElement(XSDTransformer.TAG_COMPLEX_TYPE);
        complexTypeNode.setAttribute(XSDTransformer.ATTR_MIXED, "true");
        // get the list of attributes for the parent group
        groupAttrMap = groupAttrs.get(parentGroup);
        Iterator<String> attrIter = groupAttrMap.keySet().iterator();
        while (attrIter.hasNext()) {
          Element attr = (Element) groupAttrMap.get(attrIter.next());
          Element newAttrElement = xsdDoc.createElement(XSDTransformer.TAG_ATTRIBUTE);
          appLogger.debug("adding attr. [" + attr.getAttribute(XSDTransformer.ATTR_NAME) + "]...");
          newAttrElement.setAttribute(XSDTransformer.ATTR_REF, attr.getAttribute(XSDTransformer.ATTR_NAME));
          newAttrElement.setAttribute(XSDTransformer.ATTR_USE, "optional");
          complexTypeNode.appendChild(newAttrElement);
        }
        // replace the old SimpleType node with the new ComplexType node
        newElement.replaceChild(complexTypeNode, simpleTypeNode);
      }
    }
    // 3. replace the name with the shortname in the new element
    for (int i = 0; i < newElement.getChildNodes().getLength(); i++) {
      Node childLevel1 = (Node) newElement.getChildNodes().item(i);
      if (childLevel1.getNodeName().equals(XSDTransformer.TAG_ANNOTATION)) {
        for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) {
          Node childLevel2 = (Node) childLevel1.getChildNodes().item(j);
          if (childLevel2.getNodeName().equals(XSDTransformer.TAG_APP_INFO)) {
            for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) {
              Node childLevel3 = (Node) childLevel2.getChildNodes().item(k);
              if (childLevel3.getNodeName().equals(XSDTransformer.TAG_HAS_PROPERTY)) {
                if (childLevel3.getAttributes() != null) {
                  String attrName = null;
                  Node attribute = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME);
                  if (attribute != null) {
                    attrName = attribute.getNodeValue();
                    elementProperties.put(attrName,
                        childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue());
                    if (attrName.equals(XSDTransformer.FIELD_SHORT_NAME)) {
                      fieldShortName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_COLUMN_NAME)) {
                      fieldColumnName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_DATA_TYPE)) {
                      fieldDataType = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue();
                    } else if (attrName.equals(XSDTransformer.FIELD_FMT)) {
                      fieldFormat = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE)
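
    The paste above is cut off (it ends mid-statement inside processGroupElement), so any interface has to be inferred. Here is a minimal sketch of the kind of interface the question asks for; the name IXsdTransformer and the transform(...) entry point are hypothetical, not taken from the original code, since the paste is truncated before any public method beyond the constructor appears.

import org.w3c.dom.Document;

/**
 * Hypothetical interface extracted from XSDTransformer. Only a single
 * transform-style entry point is assumed; the private helpers
 * (retrieveDoc, transformDoc, processGroup, processElement, ...) stay
 * as implementation details behind the interface.
 */
public interface IXsdTransformer {
  /** Read the source XSD, apply the name/shortname replacement, and return the result. */
  Document transform(String sourceXsdPath) throws Exception;
}

    The class declaration would then become public class XSDTransformer implements IXsdTransformer, with a public transform method delegating to retrieveDoc and transformDoc; callers and unit tests can depend on IXsdTransformer instead of the concrete class.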

    Read the article

  • New release of Microsoft All-In-One Code Framework is available for download - March 2011

    - by Jialiang
    A new release of Microsoft All-In-One Code Framework became available on March 8th. Download address: http://1code.codeplex.com/releases/view/62267#DownloadId=215627 You can download individual code samples or browse code samples grouped by technology in the updated code sample index. If this is the first time you have heard about Microsoft All-In-One Code Framework, please read this Microsoft News Center article http://www.microsoft.com/presspass/features/2011/jan11/01-13codeframework.mspx, watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, or read the introduction on our homepage http://1code.codeplex.com/.

    --------------

    New Silverlight code samples

    CSSLTreeViewCRUDDragDrop
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215808
    The code sample was created by Amit Dey. It demonstrates a custom TreeView with the added functionality of CRUD (Create, Read, Update, Delete) and drag-and-drop operations. A Silverlight TreeView control with CRUD and drag & drop is a frequently asked programming question in Silverlight forums, and many customers also requested this code sample in our code sample request service. We hope that this sample can reduce developers' efforts in handling this typical programming scenario. The following blog article introduces the sample in detail: http://blogs.msdn.com/b/codefx/archive/2011/02/15/silverlight-treeview-control-with-crud-and-drag-amp-drop.aspx.

    CSSL4FileDragDrop and VBSL4FileDragDrop
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215809
    http://1code.codeplex.com/releases/view/62253#DownloadId=215810
    The code sample demonstrates the new drag & drop feature of Silverlight 4 by implementing dragging pictures from the local file system into a Silverlight application.

    Sometimes we want to change a SiteMapPath control's titles and paths according to query string values, and sometimes we want to create the SiteMapPath dynamically. This code sample shows how to achieve these goals by handling the SiteMap.SiteMapResolve event.

    CSASPNETEncryptAndDecryptConfiguration, VBASPNETEncryptAndDecryptConfiguration
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215027
    http://1code.codeplex.com/releases/view/62253#DownloadId=215106
    In this sample, we encrypt and decrypt some sensitive information in the config file of a web application by using RSA asymmetric encryption. This project contains two snippets. The first one demonstrates how to use RSACryptoServiceProvider to generate a public key and the corresponding private key and then encrypt/decrypt string values on a page. The second part shows how to use the RSA configuration provider to encrypt and decrypt a configuration section in the web.config of a web application.
    connectionStrings section in plain text: [screenshot]
    Encrypted connectionString: [screenshot]
    Note that if you store sensitive data in any of the following configuration sections, we cannot encrypt it by using a protected configuration provider: <processModel>, <runtime>, <mscorlib>, <startup>, <system.runtime.remoting>, <configProtectedData>, <satelliteassemblies>, <cryptographySettings>, <cryptoNameMapping>.

    CSASPNETFileUploadStatus
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215028
    I believe ASP.NET programmers will like this sample, because in many cases we need customers to know the current status of files being uploaded, including the upload speed, the completion percentage, and so on. Under normal circumstances, we would need to use COM components to accomplish this function, such as Flash, Silverlight, etc.
    The uploading data can be retrieved in two places: the client side and the server side. On the client, for safety reasons, the file upload status information cannot be obtained from JavaScript, so we need a COM component like Flash or Silverlight. I do not like this approach, because the customer needs to install these components and we also need to learn another programming framework. On the server side, we can get the information through code, but the key question is how to tell the client the results. In this case, we combine a custom HttpModule and AJAX technology to illustrate how to analyze the HTTP protocol, how to break up the file request packets, how to customize the location of the server-side file cache, how to return the file upload status back to the client, and so on.

    CSASPNETHighlightCodeInPage, VBASPNETHighlightCodeInPage
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215029
    http://1code.codeplex.com/releases/view/62253#DownloadId=215108
    This sample imitates a system that needs to display highlighted code in an ASP.NET page. As a matter of fact, we sometimes input code such as C# or HTML in a web page, and we need that code to be highlighted for a better reading experience; it is easier to keep code in mind when it is highlighted. So in this case, the sample shows how to highlight code in an ASP.NET page. It is not difficult to highlight code in a web page by using the String.Replace method directly. This method returns a new string in which all occurrences of a specified string in the current instance are replaced with another specified string. However, it may not be a good idea: it is not extremely fast (in fact, it is pretty slow), and it is hard to highlight multiple keywords by using String.Replace directly. Sometimes we need to copy source code from Visual Studio to a web page and, for readability, show different types of keywords in different colors, which String.Replace alone cannot deliver. To handle this issue, we use a hashtable variable to store the different code languages and their related regular expressions with matching options, and define the CSS styles used to highlight the code in a web page; the sample project then applies the matching style object to each matching run of code (a minimal regex-based sketch of this idea appears at the end of this article). A step-by-step guide illustrating how to highlight the code in an ASP.NET page:
    1. The HighlightCodePage.aspx page: choose a type of language in the dropdownlist control, paste the code into the textbox control, and click the HighLight button.
    2. Display the highlighted code in an ASP.NET page: after the user clicks the HighLight button, the highlighted code will be displayed at the right side of the page.

    CSASPNETPreventMultipleWindows
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215032
    This sample demonstrates a step-by-step guide illustrating how to detect and prevent multiple window or tab usage in web applications. The sample imitates a system that needs to prevent multiple windows or tabs in order to solve problems like shared sessions, duplicated logins, data concurrency, etc. In fact, there are many methods of achieving this goal. Here we give a solution using JavaScript; the sample shows how to use the window.name property to check the correct links and throw other requests to invalid pages.
    This code sample uses two user controls to make a distinction between the base page and the target page; the user only needs to drag the appropriate control onto each web form page. Users therefore need not write repetitive code in every page, which makes coding lighter and the code more convenient to modify.

    JSVirtualKeyboard
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215093
    This article describes an All-In-One Framework sample that demonstrates, step by step, how to build a virtual keyboard in your HTML page. Sometimes we may need to offer a virtual keyboard to let users input something without their real keyboards. This scenario often occurs when users enter their password to get access to our sites and we want to protect the password from some kinds of back-door software, a key-logger for example, and a virtual keyboard on the page is a good choice here. To create a virtual keyboard, we first need to add some buttons to the page. When a user clicks on a certain button, the JavaScript function handling the onclick event will input an appropriate character into the textbox. That is the simple logic of this feature. However, if we indeed want a virtual keyboard to substitute for the real keyboard completely, we will need more advanced logic to handle keys like Caps-Lock and Shift, which is complex work to achieve.

    CSASPNETDataListImageGallery
    Download: http://1code.codeplex.com/releases/view/62261#DownloadId=215267
    This code sample demonstrates how to create an Image Gallery application by using the DataList control in ASP.NET. You may find Image Galleries widely used on many social networking sites, personal websites, and e-business websites. For example, you may use an Image Gallery to show a library of personally uploaded images on a personal website. A slideshow is also a popular tool to display images on websites. This code sample demonstrates how to use the DataList and ImageButton controls in ASP.NET to create an Image Gallery with image navigation. You can click on a thumbnail image in the DataList control to display a larger version of the image on the page. The sample code reads the image paths from a certain directory into a FileInfo array. Then the FileInfo array is used to populate a custom DataTable object which is bound to the DataList control. This code sample also implements a custom paging system that allows five images to be displayed horizontally on one page. The following link buttons are used to implement the custom paging system:
    - First
    - Previous
    - Next
    - Last
    Note: we recommend that you use this method to load no more than five images at a time. You can also set the SelectedIndex property of the DataList control to limit the number of thumbnail images that can be selected, and set the SelectedStyle property of the DataList control to indicate which image is selected.

    VBASPNETSearchEngine
    Download: http://1code.codeplex.com/releases/view/62253#DownloadId=215112
    This sample shows how to implement a simple search engine in an ASP.NET web site. It uses a LIKE condition in a SQL statement to search the database, then highlights the keywords in the search results by using regular expressions and JavaScript.

    New Windows General code samples

    CSCheckEXEType, VBCheckEXEType
    Downloads: http://1code.codeplex.com/releases/view/62253#DownloadId=215045
    http://1code.codeplex.com/releases/view/62253#DownloadId=215120
    The sample demonstrates how to check an executable file's type.
New Windows General code samples

CSCheckEXEType, VBCheckEXEType
Downloads:
http://1code.codeplex.com/releases/view/62253#DownloadId=215045
http://1code.codeplex.com/releases/view/62253#DownloadId=215120

The sample demonstrates how to check an executable file's type. For a given executable file, it can determine:

1. whether it is a console application;
2. whether it is a .NET application;
3. whether it is a 32-bit native application;
4. the full display name of a .NET application, e.g. System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089, processorArchitecture=MSIL.

A rough sketch of the underlying header checks appears at the end of this section.

New Internet Explorer code samples

CSIEExplorerBar, VBIEExplorerBar
Downloads:
http://1code.codeplex.com/releases/view/62253#DownloadId=215060
http://1code.codeplex.com/releases/view/62253#DownloadId=215133

The sample demonstrates how to create and deploy an IE Explorer Bar that lists all the images in a web page.

CSBrowserHelperObject, VBBrowserHelperObject
Downloads:
http://1code.codeplex.com/releases/view/62253#DownloadId=215044
http://1code.codeplex.com/releases/view/62253#DownloadId=215119

The sample demonstrates how to create and deploy a Browser Helper Object; the BHO in this sample is used to disable the context menu in IE.

New Windows Workflow Foundation code samples

CSWF4ActivitiesCorrelation
Download:
http://1code.codeplex.com/releases/view/62253#DownloadId=215085

Consider two running instances of the same workflow, each shaped like this:

    start
      |
    Receive activity
      |
    Receive2 activity
      |
     ...

When a WCF request arrives to call the second Receive2 activity, which instance should take care of it? The answer is correlation, and this sample shows how to correlate two workflow services so that they work together.

New ASP.NET code samples

CSASPNETBreadcrumbWithQueryString
Download:
http://1code.codeplex.com/releases/view/62253#DownloadId=215022
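As promised above for the CheckEXEType samples: their actual implementation is not shown on this page, but the first three checks can be sketched by reading the PE headers directly. The offsets below follow the PE/COFF specification, the ExeTypeChecker class is invented here, and the fourth item (the display name) can be obtained separately with AssemblyName.GetAssemblyName:

    using System;
    using System.IO;

    public static class ExeTypeChecker
    {
        public static void Describe(string path)
        {
            using (var stream = File.OpenRead(path))
            using (var reader = new BinaryReader(stream))
            {
                // DOS header: "MZ" magic; e_lfanew at 0x3C points to the PE header.
                if (reader.ReadUInt16() != 0x5A4D)
                    throw new BadImageFormatException("Not an MZ executable.");
                stream.Seek(0x3C, SeekOrigin.Begin);
                uint peOffset = reader.ReadUInt32();

                // PE signature "PE\0\0" followed by the 20-byte COFF file header.
                stream.Seek(peOffset, SeekOrigin.Begin);
                if (reader.ReadUInt32() != 0x00004550)
                    throw new BadImageFormatException("Missing PE signature.");
                ushort machine = reader.ReadUInt16();   // 0x014C = x86, 0x8664 = x64

                // The optional header follows the COFF header.
                long optionalHeader = peOffset + 4 + 20;
                stream.Seek(optionalHeader, SeekOrigin.Begin);
                ushort magic = reader.ReadUInt16();     // 0x10B = PE32, 0x20B = PE32+

                // Subsystem sits at offset 68 of the optional header in both
                // formats: 2 = Windows GUI, 3 = Windows console.
                stream.Seek(optionalHeader + 68, SeekOrigin.Begin);
                ushort subsystem = reader.ReadUInt16();

                // Data directory 14 is the CLR runtime header; a non-zero RVA
                // means the image is managed (.NET).
                long directories = optionalHeader + (magic == 0x20B ? 112 : 96);
                stream.Seek(directories + 14 * 8, SeekOrigin.Begin);
                uint clrHeaderRva = reader.ReadUInt32();

                Console.WriteLine("Console app   : {0}", subsystem == 3);
                Console.WriteLine(".NET app      : {0}", clrHeaderRva != 0);
                Console.WriteLine("32-bit native : {0}", machine == 0x014C && clrHeaderRva == 0);
            }
        }
    }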

    Read the article

  • CodePlex Daily Summary for Wednesday, June 04, 2014

CodePlex Daily Summary for Wednesday, June 04, 2014

Popular Releases

SEToolbox: SEToolbox 01.032.018 Release 1: Added the ability to merge/join two ships, regardless of origin. Added a Language selection menu to set the display text language (for SE resources only), and fixed inherent issues. Added full support for Dedicated Servers, allowing use on a PC without Steam present and only the Dedicated Install. Added a Browse button for user-specified save game selection in the Load dialog. Installation of this version will replace the older version.

DNN Blog: 06.00.07: Highlights:
• Enhancement: Option to show management panel on child modules
• Fix: Changed SPROC and code to ensure the right people are notified of pending comments. (CP-24018)
• Fix: Fix to have notification actions authentication assume the right module id so these will work from the messaging center (CP-24019)
• Fix: Fix to issue in categories in a post that will not save when no categories and no tags are selected

csv2xlsx: Source code: Source code of the application

HigLabo: HigLabo_20140603: Bug fix of MimeParser for nested MimeContent parsing.

Family Tree Analyzer: Version 4.0.0.0-beta2: This major release introduces a significant new feature: the ability to search the UK Ordnance Survey 50k Gazetteer for small placenames that often don't appear on a modern Google Map. This means significant numbers of locations that were previously not found by Google Geocoding will now be found by using OS Geocoding. In addition I've added support for loading the Scottish 1930 Parish boundary maps, whilst slightly different from the parish boundaries in use in the 1800s that are familiar to...

TEncoder: 4.0.0:
• Added: Video downloader
• Added: Total progress will be updated more smoothly
• Added: MP4Box progress will be shown
• Added: A tool to create a gif image from video
• Added: An option to disable trimming
• Added: Audio track option won't be used for mpeg sources by default
• Fixed: Subtitle position wasn't used
• Fixed: Duration info in the file list wasn't updated after trimming
• Updated: FFMpeg

SSIS Multiple Hash: Multiple Hash 1.6.2.3: Release Notes: This release is an SQL 2014 fix release. It addresses the following: the 1.6.1.2 release's SQL 2014 installer didn't work for SQL 2014 RTM, and the x64 installer also showed x86 for both x64 and x86 in SQL 2014. Please download the x64 or x32 file based on the bitness of your SQL Server installation. BIDS or SSDT/BI ONLY INSTALLS ARE NOT DETECTED. You MUST use the Custom install, to install when the popup is shown, and select which versions of SQL Server are available. A war...

QuickMon: Version 3.14 (Pie release): This is unofficially the 'Pie' release. There are two big changes: 1. 'Presets' - basically templates; future releases might build on this to allow users to add more presets. 2. The MSI installer now allows you to choose components (in case you don't want all collectors etc.), which means you don't have to download separate components anymore (AllAgents.zip is still included in case you want to use them separately). Some other changes: 1. Added/changed the default file extension for monitor packs to *.qmp (...

VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014): Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).

Keepass2Android: 0.9.4-pre1:
• added plug-in support: see settings for how to get plug-ins!
• published QR plug-in (scan passwords, display passwords as QR code, transfer entries to other KP2A devices)
• published InputStick plugin (transfer credentials to your PC via bluetooth - requires InputStick USB stick)
• third-party apps can now simply implement querying KP2A for credentials. Are you a developer? Please add this to your app if suitable!
• added TOTP support (compatible with KeeOTP and TrayTotp)
• app should no l...

The IRIS Toolbox Project: IRIS Reference Manual Release 20140602: Reference Manual for Release 20140602.

Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center. This release finally addresses the over-zealous behaviour of the HTML Sanitizer, which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for WebForms compatibility. This will be the last version of AntiXSS that contains a sanitizer; any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....

Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide MUST-HAVE methods with outstanding performance missing from the SqlBulkCopy class, like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk methods: BulkDelete, BulkInsert, BulkMerge, BulkUpdate, BulkUpsert. Utility methods: GetSqlConnection, GetSqlTransaction. You like this library? Find out how and why you should support Z Project and become a member.

Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines: Added all the parameters available from the Timeline Endpoints in Tweetinvi. This is available for HomeTimeline, UserTimeline, MentionsTimeline.

    // Simple query
    var tweets = Timeline.GetHomeTimeline();

    // Create a parameter for queries with specific parameters
    var timelineParameter = Timeline.CreateHomeTimelineRequestParameter();
    timelineParameter.ExcludeReplies = true;
    timelineParameter.TrimUser = true;
    var tweets = Timeline.GetHomeTimeline(timelineParameter);

Tweetinvi 0.9.3.1...

Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...

ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.

Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes:
• Int/short Set methods of WritablePixelCollection are now unsigned.
• The Q16 build no longer uses HDRI; switch to the new Q16-HDRI build if you need HDRI.

fnr.exe - Find And Replace Tool: 1.7: Bug fixes:
• Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line
• Fix for a bug where the selection in the Encoding drop down was different when generating the command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34
• Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024
• Fix for finding replacing...

SoundCloud Downloader: SC Downloader V1.0: Newest release. Requirements: .NET Framework 4.0/4.5. Changes since releasing the source: Replaced the ProgressBar and Buttons with ones I created from scratch; you may use the .dll files at your own risk. Changed the authentication mode to Windows. NOTE! When extracted, execute setup.exe to launch the installer! ENJOY!

VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes:
• NEW: Added support for 'GokoImage.com' links
• NEW: Added support for 'ViperII.com' links
• NEW: Added support for 'PixxxView.com' links
• NEW: Added support for 'ImgRex.com' links
• NEW: Added support for 'PixLiv.com' links
• NEW: Added support for 'imgsee.me' links
• NEW: Added support for 'ImgS.it' links

New Projects

2112110148: 2112110148 Kieu Thi Phuong Anh

AutoHelp: Inspired by Docy, AutoHelp exposes XML documentation embedded in source code. An MVC 5 / TypeScript app connected to a WebAPI, AutoHelp is a modern HTML5 app.

Feedbackata: An exercise in interactive development.

Ideaa: A place to get inspired and find ideas about a topic you are interested in.

Impression Studio - Free presentation Maker Tool: Impression Studio is a free presentation maker tool based on the impress.js library. It aims to be a free alternative to Prezi. This tool is under heavy development.

LavaMite: LavaMite is a program that aims to capture the random wax formations that arise in a lava lamp before it is bubbling freely.

Load Runner - HTTP Pressure Test Tool: A lightweight HTTP pressure test tool supporting multiple methods / content types / requests, written in C# 4.0.

Microsoft Research Software Radio SDK: Microsoft Research Software Radio SDK

Msp: Msp

Power Torrent: Windows PowerShell module enabling BitTorrent (R) functionalities.

    Read the article
