Search Results

Search found 493 results on 20 pages for 'shipping'.

Page 12/20 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • In general, do 3rd-party laptop AC adapters / chargers work reliably?

    - by MGOwen
    The AC adapters for my two Dell laptops wore out very quickly. One supplies power but won't charge the battery (that one is about 3 years old; the other is a bit newer). Both have worn through to the shielding around the notebook connector and look like they'll be unusable soon. Checking user reviews of the replacement adapter on Dell's own website, it appears they usually fail this fast. Apparently Dell does this deliberately to make money (their adapters sell for about 10-15 times what they cost to make); the same goes for replacement batteries. I can see there are plenty of much cheaper ($50 less) compatible AC adapters on eBay. Does anyone have experience with these? Naturally I'm nervous that a crappy knock-off could fry my notebook, but has this actually happened to anyone recently? Has anyone had a good experience? Can anyone recommend a good online seller (preferably one that doesn't charge nonsensical shipping costs to Australia)?

    Read the article

  • Stylecop 4.7.37.0 has been released

    - by TATWORTH
    StyleCop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:
    - Add docs for the new SA1650 spelling rule.
    - Fix for 7395: don't remove parentheses around await expressions.
    - Insert a returns element into docs within a see element.
    - Update the StyleCop DLLs in our tools folder.
    - Fix for 7392: insert generic type docs for return types correctly.
    - Fix for 7393: allow documentation elements with attributes to end the string and still be valid.
    - Make sure the MSBuild task logs the warning id and type of exception; unless the description field holds all this info, VS cannot show the text in the Error List.
    - Load custom dictionaries for multiple cultures. For a culture like en-GB, we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    - Update the standard shipping dictionaries.
    - Element documentation spelling fixes.
    - Reduce the standard dictionary.
    - Update our own devbuild StyleCop checks.
    - Don't check the spelling of XML documentation attributes or anything inside <c> or <code> elements.
    - Styling updates and fixes.
    - Add timestamps for all the dependent files into the StyleCopResults.cache.
    - Add a FileSystemWatcher to all custom dictionary files.
    - Write out the full violation into the StyleCopResults.cache.
    - Change a rule's description text.
    - NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS addin to show the rule violation in more detail. Add the spelling checker to the deployment. Set our own culture to en-US. First draft of the documentation spelling checker.
    - Documentation spelling fixes.
    - Fix for 7325: don't throw 1126 in goto statements.
    - Fix for 7090: add TargetsDir to the registry during install.
    - Fix for 7060: sort usings after moving them inside the namespace.
    - Fix FxCop issues.
    - Fix for 7389: detect CpuCount on Unix/Mac.
    - Fix for 6788: allow opening curly brackets for scope; added new tests.
    - Updating constants.
    - Fix for 7167: show the version number of StyleCop in the VS Help window.
    - Only output StyleCop excluded files if there are any.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes in SAP, they are in need of a major refresh. The existing system:
    - A Microsoft Access project performs dashboard duties.
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project.
    - It uses 3rd-party libraries referenced by Access to directly interface with a control system (read: motors, conveyors, and counters).
    - There are individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility.
    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and who prescribed the deliverables being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • What is an Enterprise Resource Planning (ERP) System?

    In order to understand what an Enterprise Resource Planning system is, let us look at a classic American kids' snack, the Rice Krispie Treat, and conceptually view the treat as a company's internal applications as a whole. The individual Rice Krispies represent the company's departmentalized software applications, and each consists of a combination of ingredients that can be broken down into data, user interfaces, and business logic. Next, the margarine or butter that helps the marshmallows bind to the Rice Krispies plays the role of a data source, typically a relational database management system. Finally, the melted marshmallows act as the ERP software that connects all of the individual departmental applications into one unified system through which users can interact with all of the dispersed systems. An example of this would be a customer placing an order with a telephone operator: once the order is processed, an employee in the shipping department can see the order ready for fulfillment on his order screen. The ERP acts as a go-between for the various independent departmental systems so that they can integrate with one another.

    Read the article

  • Pervasive database backup

    - by Steven
    I'm looking for the best way to back up my Pervasive database. I've read the documentation but still have a few questions. It appears that the Continuous Operations method only allows me to back up the entire database? So I'd run butil -startbu @filelist, then back up the entire database (copy, rsync, etc.), then run butil -endbu @filelist. Looking through the documentation I don't see a way to get transaction logs out of this method, like I would do for MSSQL (BACKUP LOG ACCT TO DISK) or Postgres (archive_command). With rsync, it might still be feasible to do this every 15 minutes. The Archival Logging method means I would have to occasionally stop the database to get a full backup, which is acceptable for me. But can I copy the log files off of the server every 15 minutes, i.e. log shipping? Thank you.
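    For what it's worth, the Continuous Operations sequence described here is easy to script. A minimal C# sketch, assuming butil is on the PATH, @filelist names your data files, and the robocopy source/destination paths are placeholders for your own copy step:

        using System;
        using System.Diagnostics;

        class PervasiveBackup
        {
            // Runs a console command and throws if the exit code signals failure.
            static void Run(string file, string args, int maxOkExitCode = 0)
            {
                var psi = new ProcessStartInfo(file, args) { UseShellExecute = false };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    if (p.ExitCode > maxOkExitCode)
                        throw new InvalidOperationException(
                            file + " " + args + " exited with code " + p.ExitCode);
                }
            }

            static void Main()
            {
                // Put the files listed in @filelist into Continuous Operations mode...
                Run("butil", "-startbu @filelist");
                try
                {
                    // ...copy the now-stable data files (robocopy exit codes below 8 mean success)...
                    Run("robocopy", @"C:\PVSW\Data D:\Backup\PVSW /MIR", maxOkExitCode: 7);
                }
                finally
                {
                    // ...and always end Continuous Operations so the deltas are rolled back in.
                    Run("butil", "-endbu @filelist");
                }
            }
        }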

    Read the article

  • Security considerations for my first eStore.

    - by Rohit
    I have a website through which I am going to sell a few products. It is hosted on simple shared hosting and does not have SSL. On the products page, each product has a Buy Now button created from my PayPal merchant account. PayPal recommends using its Button Factory to create secure buttons and saving them inside PayPal itself. I have followed that advice, so the code of any button is secure and does not disclose any information about either a product or its price. When the user clicks a Buy Now button, he/she is taken to the PayPal site, where a page is opened over SSL for the user to fill in the credit card and shipping details. After a successful transaction, control is passed back to my site. I want to know whether there is still any point at which security could be compromised.

    Read the article

  • Flow Fields in Dynamics NAV

    - by T
    I don’t know the exact business reason, but someone asked how to identify all sales shipping orders with any negative quantities on them. They needed to print these separately from the shipments that have only positive quantities. Here is one way to solve this problem in NAV:
    - Open the Sales Shipment Header table.
    - Add a field of type Boolean. (The field type should match the value you expect the flow field method to return. In this case we will use an Exist, so we expect a Boolean, but we could have used a Sum, Average, Min, Max, Count, or a Lookup to return a value.)
    - Define it as a FlowField and click on the ellipsis next to CalcFormula.
    Now [Has Negative Qty] is false if no negative lines exist and true if any do, so it is ready to be used as a filter on the report. Don't forget that if you are using it in code, you may need to call CALCFIELDS. Hope that helps. If you really can't afford the field in your header, you can use code to check the lines for a negative value each time and use a skip or break function to skip that header record, but it seems expensive to check them all if you only want a few to print. Please let me know if you think of a better solution.
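    For reference, an Exist-type CalcFormula for such a field would look roughly like this (table and field names are illustrative; adjust them to your own schema):

        Exist("Sales Shipment Line" WHERE (Document No.=FIELD(No.),Quantity=FILTER(<0)))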

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up. We are also looking into transactional replication. The data only needs to be copied one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like schema changes to the source tables (fields added or modified) to be as easy as possible. Thanks in advance for all suggestions.

    Read the article

  • Is it possible to have Grub2's boot.img in the MBR and have it load core.img from a separate boot partition?

    - by wesley
    I have a multiboot system that I would like to use Grub to manage. The version of Grub shipping with my Linux distro is Grub2, and it installs its equivalent of stage 1.5-2, core.img, into the remaining sectors on the first track after the MBR but before the first partition. Unfortunately, those sectors are needed by another program. I have a separate primary /boot partition. If I could only keep boot.img as my MBR but have it look in the /boot partition for core.img rather than the embedded one in the sectors immediately following the MBR, everything would work fine. Is this possible with grub2?

    Read the article

  • Trade In, Trade Up Promotion: SPARC Consolidation Now Through May 31st

    - by swalker
    Dear Partner, Installed Base Business (IBB) technology refresh is one of the most important activities for Oracle, for you, and for your customers. It allows your existing customers to benefit from the most up-to-date, best-of-breed Oracle products. And it's an exciting time to perform a technology refresh: a new SPARC promotion is available now, closing 31st May 2012. Customers trading in older SPARC systems and upgrading to a new SPARC SuperCluster T4-4 or SPARC Enterprise M8000/M9000 can get $4,000 per CPU. The discount is pre-approved and upfront (maximum discounts apply). The major highlights are as follows:
    - Targeted systems: upgrade to SPARC M8000, M9000, SuperCluster.
    - Qualified installed base: upgrade from all older generations of SPARC systems.
    - Promotional offer: a trade-in value of $4K per CPU; a pre-approved maximum discount (including trade-in) not to exceed 60% on M8000/M9000 systems and 25% on SuperCluster; no-cost dock-to-dock shipping; and environmentally safe disposal of the returned hardware through Oracle's best-of-class recycling processes.
    Recommendations: we recommend that you take the following actions:
    - As usual, please register your opportunities in OMM.
    - When you do so, please make sure you place the following campaign name in the "Marketing Initiative" field of OMM: EMEA_Tech Refresh-IBB Campaign_12H1_Follow Up_O.
    For all the details, please view the rules and FAQs. For more information, please visit the Promo Partner Site here. For more information on IBB and the Oracle Upgrade Advantage Program (UAP): http://www.oracle.com/us/products/servers-storage/upgrade-advantage-program/index.html and http://www.oracle.com/partners/secure/sales/oracle-ibb-program-for-partners-184291.html Contacts: for questions, please contact your favorite Oracle Partner Account Manager.

    Read the article

  • Oracle E-Business Products New Search Helpers for Guided Resolution of Customer Issues

    - by user793044
    Oracle E-Business Proactive Support has created many new guided resolution documents that you may find helpful in resolving issues in your EBS applications. These new documents are called "Search Helpers" and they guide you through your issue to a solution. They are meant to be an easy and fast method of finding a relevant, complete solution. Hundreds of notes and service requests were reviewed and the best solutions to these known issues were selected. For some issues, notes were updated to better clarify the solution. In other cases, if a note with a solution did not already exist, one was created. You start the process by selecting the scenario you have encountered. You may have received an error message, or there may be a particular area of the application in which you have encountered an issue. Based on your selection of the issue, the Search Helper will present one or more additional possible symptoms. When you have selected from both of these two sections, you are then presented with one or more articles known to have fully solved this issue in the past. Several EBS products have produced Search Helper documents; take a look at Doc ID 1501724.1 for an index of the current EBS Search Helpers. Here is an example of a Search Helper from the Receivables Transactions area: after selecting the functional area of "Entering / Updating Transactions", a list of known symptoms is presented, and when "Transaction numbers are not in sequence" is selected, a solution link is provided for Document ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. The EBS applications that currently have published Search Helpers are: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, and Value Chain Planning.

    Read the article

  • How to recover from locked down XP Home computer

    - by Chris
    We've got a kiosk machine provided to us by a manufacturer. The video card is flaky, so I want to replace it with another card we have on hand, rather than shipping the machine across the country. The problem is that they have policies in place that lock the system down to the point where only the manufacturer's demo works on the computer, so I can't install drivers for the newer card. I know pretty much nothing about Windows policies or the policy editor. Am I fighting a losing battle trying to replace this card?

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode: CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User); And say CreateInvoiceAndCalculate does the following:
    - Creates a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress.
    - Creates a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the Item, the Quantity, and the cost of the item as of now (by looking it up from an Items table).
    - Calculates the total amount of the invoice (excluding shipping and taxes) and stores it in the new Invoice row.
    At a glance you wouldn't be able to tell if this was a method in my application's code or a stored procedure in the database being exposed as a function by the ORM, and to some extent it doesn't really matter. Now technically none of this is business logic: you're not making any decisions, just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and instead should be in code (see the sketch below). So for this specific example, why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter, as long as it's sufficiently well documented?
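    To make the trade-off concrete, here is roughly what the application-code version of this pseudocode could look like in C# (all types, and the in-memory stand-in for the Items table, are invented for illustration):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class InvoiceItem { public int ItemId; public int Quantity; public decimal UnitCost; }

        class Invoice
        {
            public int UserId;
            public string DispatchAddress;
            public List<InvoiceItem> Items = new List<InvoiceItem>();
            public decimal Total; // excluding shipping and taxes
        }

        class InvoiceService
        {
            // Stand-in for the Items table: the current cost per item id.
            readonly Dictionary<int, decimal> currentCosts;
            public InvoiceService(Dictionary<int, decimal> costs) { currentCosts = costs; }

            public Invoice CreateInvoiceAndCalculate(
                IEnumerable<KeyValuePair<int, int>> itemsAndQuantities, // itemId -> quantity
                string dispatchAddress, int userId)
            {
                var invoice = new Invoice { UserId = userId, DispatchAddress = dispatchAddress };
                foreach (var iq in itemsAndQuantities)
                {
                    // Snapshot the cost *as of now*, exactly as the stored procedure would.
                    invoice.Items.Add(new InvoiceItem
                    {
                        ItemId = iq.Key,
                        Quantity = iq.Value,
                        UnitCost = currentCosts[iq.Key]
                    });
                }
                invoice.Total = invoice.Items.Sum(i => i.UnitCost * i.Quantity);
                return invoice; // an ORM would persist invoice + items here in one transaction
            }
        }

    Either home for this logic produces the same rows; the question in the post is purely about where the cost snapshot and the total calculation should live.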

    Read the article

  • Automatically copy files out of directory

    - by wizard
    I had a user's laptop stolen recently during shipping, and it was set up with Windows Live Sync. The thief's (or buyer's) kids took some photos of themselves, and the photos were synced to the user's My Documents. I had just finished moving the user's files out of the synced My Documents folder when I noticed this. Later they took some more photos and a video. I wrote up a batch script to copy files out of the synced directory every 5 minutes into a dated directory. In the end I ended up with a lot of copies of the same few files. Ignoring what Windows Live Sync offers (at the time there was no way to undelete files; I've since moved on to Dropbox, so this isn't really an issue for me any more): what's the best way to preserve changes and files from a directory? I'm interested in Windows solutions, but if you know of a good way on a *nix, please go ahead and share.
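    On the Windows side, an event-driven copy avoids the pile of duplicates that a fixed 5-minute schedule produces. A rough C# sketch using FileSystemWatcher (both paths are placeholders):

        using System;
        using System.IO;

        class SyncPreserver
        {
            static void Main()
            {
                const string watched = @"C:\Users\user\Documents"; // the synced folder
                const string archive = @"D:\Archive";               // where copies accumulate

                var watcher = new FileSystemWatcher(watched) { IncludeSubdirectories = true };
                watcher.Created += (s, e) => Preserve(e.FullPath, archive);
                watcher.Changed += (s, e) => Preserve(e.FullPath, archive);
                watcher.EnableRaisingEvents = true;

                Console.WriteLine("Watching; press Enter to stop.");
                Console.ReadLine();
            }

            static void Preserve(string path, string archive)
            {
                try
                {
                    if (!File.Exists(path)) return; // skip directories and deletions
                    // Timestamp each copy so later versions never overwrite earlier ones.
                    string name = DateTime.Now.ToString("yyyyMMdd-HHmmss") + "-"
                                  + Path.GetFileName(path);
                    Directory.CreateDirectory(archive);
                    File.Copy(path, Path.Combine(archive, name), overwrite: false);
                }
                catch (IOException) { /* file still locked by the writer; ignore this event */ }
            }
        }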

    Read the article

  • Community TFS Build Manager available for Visual Studio 2012 RC

    - by Jakob Ehn
    I finally got around to pushing out a version of the Community TFS Build Manager that is compatible with Visual Studio 2012 RC. Unfortunately I had to do this as a separate extension, since it references different versions of the TFS assemblies, and some properties and methods that the 2010 version uses are now obsolete in the TFS 2012 API. To download it, just open the Extension Manager, select Online, and search for TFS Build. You can also download it from this link: http://visualstudiogallery.msdn.microsoft.com/cfdb84b4-285e-4eeb-9fa9-dad9bfe2cd10 The functionality is identical to the 2010 version; the only difference is that you can't start it from the Team Explorer Builds node (since the Team Explorer has been completely rewritten and the extension APIs are not yet published). So, to start it you must use the Tools menu. We will continue shipping updates to both versions in the future, as long as the functionality is compatible with both TFS 2010 and TFS 2012. You might also note that the color scheme used for the build manager doesn't look as good with the VS2012 theme... Hope you will enjoy the tool in Visual Studio 2012 as well. I want to thank all the people who have downloaded and used the 2010 version! For feedback, feature requests, and bug reports, please post to the CodePlex site: http://tfsbuildextensions.codeplex.com

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article

  • High-End Gaming Desktop PC in France [closed]

    - by Lerikunus
    Hello, I was looking to buy an Alienware Area 51, as my desktop crashed some days ago. But the preliminary ship date is 03.01.11, and I need the new PC by Friday next week =( Does someone know a(n) (online) store whose build & shipping time is low enough that I can get it by next Friday? I am living in France (Nice). Or any place I can ask for this kind of advice? This is my desired configuration: Overclocked Intel® Core™ i7 980X Extreme six-core processor (4.0GHz, 12MB cache); 12GB triple-channel 1600MHz DDR3; dual 2GB ATI Radeon™ HD 6950; 2TB RAID 1+0 (4x 1TB SATA-II, 7,200 RPM, 32MB cache HDDs); 2TB RAID 1 (2x 2TB SATA-II, 7,200 RPM, 32MB cache HDDs). Thank you very much!

    Read the article

  • Best approach for tracking dependent state

    - by Pace
    Let's pretend I work on a project tracking application. The application is a database-backed, server-hosted web application. In this application there are Projects, which have many Activities, which have many Tasks. A Task has two date fields, an originalDueDate and a projectedDueDate. In addition, there are dynamic fields on the Activities and the Projects which indicate whether the Activity or Project is behind schedule, based on the projected due dates of the child tasks and various other variables such as remaining buffer time. There are a number of things that can cause the projectedDueDate to change. For example, an employee working on the project may (via a server request) enter in a shipping delay. Alternatively, a site may (via a server request) enter an unexpected closure. When any of these things occur, I need to not only update the projectedDueDate of the Task but also trigger the corresponding Project and Activity to update as well. What is the best way to do this? I've thought of the observer pattern, but I don't keep a single copy of all these objects in memory: when a request comes in, I query the Task from the database, and at that point there is no associated Activity in memory that could be a listener. I could remove the ability to query for Tasks directly and force the application to query first by Project, then by Activity (in the context of the Project), then by Task (in the context of the Activity), adding the observer relationships at each step, but I'm not sure that is the best way. I could set up a database event listening system, so that when a Task-modified event is dispatched I have a handler which queries for the Activity at that point. Or I could simply set up a two-way relationship between Task and Activity, so that the Task knows about its parent Activity, and when the Task updates its state it grabs its parent and updates that state too (a sketch of this option follows below). Right now I'm stuck considering all the options and am wondering if any single approach (it doesn't have to be a listed approach) jumps out at others as the best.
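    A minimal C# sketch of that last, two-way-relationship option (persistence is omitted, and the parent references would have to be populated, e.g. lazy-loaded by your ORM, whenever a Task is fetched on its own):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Task
        {
            public Activity Parent;
            public DateTime OriginalDueDate;
            DateTime projectedDueDate;

            public DateTime ProjectedDueDate
            {
                get { return projectedDueDate; }
                set
                {
                    projectedDueDate = value;
                    Parent?.Recalculate(); // push the change up the chain
                }
            }
        }

        class Activity
        {
            public Project Parent;
            public List<Task> Tasks = new List<Task>();
            public bool BehindSchedule { get; private set; }

            public void Recalculate()
            {
                BehindSchedule = Tasks.Any(t => t.ProjectedDueDate > t.OriginalDueDate);
                Parent?.Recalculate(); // and keep going up to the Project
            }
        }

        class Project
        {
            public List<Activity> Activities = new List<Activity>();
            public bool BehindSchedule { get; private set; }
            public void Recalculate() => BehindSchedule = Activities.Any(a => a.BehindSchedule);
        }

    The catch identified in the post still applies: this only works if loading a Task also makes its Parent (and the parent's siblings, for the recalculation) reachable.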

    Read the article

  • Move 2TB Win7x64 to 640GB HDD

    - by reedacus25
    Basically I just got a new (refurbished) HP H8-1260t to use for Windows Media Center. The drive it shipped with is a 2TB Hitachi, but I have an older, barely used 640GB WD Caviar Blue 7200RPM drive that I would like to move my Windows install to. I have about 750GB of data, with over 1TB free on the drive. Having googled my heart and fingers out, I think I can shrink my partition down to its currently used size, leaving over 1TB free for a second partition that I can move my media files to, and then further shrink my boot partition so that a system image smaller than 640GB will fit on that WD drive. I want a separate drive for media and TV recordings, apart from my boot disk. Does this sound like the easiest method, or am I doing this all wrong? Help appreciated.

    Read the article

  • Hosting ESXi (free edition) [closed]

    - by Peter Adss
    We currently have one physical server running the free version of VMware ESXi that virtualizes a Windows SBS 2003 server and a Citrix server. We need to colocate the server and are investigating our options. Are there places that will host our virtual servers and save us the expense of shipping the physical server out for colocation? In my mind, we'd copy the VMs to disk and ship them out. Does the fact that we're using the free version of ESXi create a barrier to this idea? Thanks for the help; I realize this is a stupid question.

    Read the article

  • WPF DataGrid does not scroll on drag

    - by Greg R
    I have a strange problem with a WPF DataGrid from the WPF Toolkit. The scrollbars appear correctly when the number of rows grows, and scrolling works when you press the up or down arrows on the scrollbar. The problem comes in when I try to drag the scrollbar on the DataGrid. My page has a ScrollViewer around it, and when I click and drag the scrollbar on the grid, it scrolls the page's scroller instead. If the scrollbar does not appear on the page, the grid still does not scroll. Is there a workaround for this? I would really appreciate some help with this issue! For example, in this case, if the page is < 280 it scrolls on drag, but drag scrolling does not work on the grid.

        <ScrollViewer ScrollViewer.IsDeferredScrollingEnabled="True"
                      HorizontalScrollBarVisibility="Auto"
                      VerticalScrollBarVisibility="Auto">
          <DockPanel>
            <dg:DataGrid HorizontalScrollBarVisibility="Auto" SelectionMode="Single"
                         CanUserAddRows="False" CanUserDeleteRows="False"
                         CanUserResizeColumns="False" CanUserSortColumns="False"
                         AutoGenerateColumns="False" RowHeaderWidth="17"
                         ItemsSource="{Binding Path=OrderSearchVm}" IsReadOnly="True"
                         MaxHeight="280" DockPanel.Dock="Top">
              <dg:DataGrid.Columns>
                <dg:DataGridTextColumn Width="75" Header="Date" Binding="{Binding Path=OrderDate}">
                  <dg:DataGridTextColumn.ElementStyle>
                    <Style TargetType="{x:Type TextBlock}">
                      <Setter Property="TextWrapping" Value="Wrap" />
                    </Style>
                  </dg:DataGridTextColumn.ElementStyle>
                </dg:DataGridTextColumn>
                <dg:DataGridTextColumn Header="Type" Binding="{Binding Path=OrderType}" Width="45" />
                <dg:DataGridTextColumn Header="Shipping Name" Binding="{Binding Path=ShipToName}" Width="115">
                  <dg:DataGridTextColumn.ElementStyle>
                    <Style TargetType="{x:Type TextBlock}">
                      <Setter Property="TextWrapping" Value="Wrap" />
                    </Style>
                  </dg:DataGridTextColumn.ElementStyle>
                </dg:DataGridTextColumn>
                <dg:DataGridTextColumn Header="Shipping Address" Binding="{Binding Path=ShipToAddress}" Width="160">
                  <dg:DataGridTextColumn.ElementStyle>
                    <Style TargetType="{x:Type TextBlock}">
                      <Setter Property="TextWrapping" Value="Wrap" />
                    </Style>
                  </dg:DataGridTextColumn.ElementStyle>
                </dg:DataGridTextColumn>
                <dg:DataGridTextColumn Header="E-Mail" Binding="{Binding Path=Email}" Width="140">
                  <dg:DataGridTextColumn.ElementStyle>
                    <Style TargetType="{x:Type TextBlock}">
                      <Setter Property="TextWrapping" Value="Wrap" />
                    </Style>
                  </dg:DataGridTextColumn.ElementStyle>
                </dg:DataGridTextColumn>
              </dg:DataGrid.Columns>
            </dg:DataGrid>
          </DockPanel>
        </ScrollViewer>

    Read the article

  • Pass existing model into AJAX PartialViewResult

    - by Joe
    I’m using AJAX to asynchronously update a partial view and need to pass the existing model into the partial view.

    Controller action:

        public ActionResult Edit(int id)
        {
            var vM = new MyViewModel(); // vM is the view model
            return View(vM);
        }

    Edit view:

        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(true)
            ...
            <span id="Ship">@Html.Partial("AJAX_Views/_Ship")</span>

    _Ship partial view:

        @model MyProject.ViewModels.MyViewModel
        <table class="detailtable" style="min-width:398px">
          <tr>
            <th style="padding-left:132px" colspan="2">
              <span class="editor-label">Shipping&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Address</span>
              <span class="editor-label" style="padding-left:55px">
                @Ajax.ActionLink("Delete", "SHIPDEL",
                    new AjaxOptions { UpdateTargetId = "Ship",
                                      InsertionMode = InsertionMode.Replace,
                                      HttpMethod = "Get" })</span>
          <tr><th style="min-width:152px"><span class="editor-label">Street:</span></th>
            @Html.HiddenFor(m => m.CmpAdrsSh.Id)
            @Html.HiddenFor(m => m.CmpAdrsSh.CompPersonId)
            @Html.HiddenFor(m => m.CmpAdrsSh.IsShip)
            <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.Street)
                @Html.ValidationMessageFor(m => m.CmpAdrsSh.Street)</span></td></tr>
          <tr><th><span class="editor-label">City:</span></th>
            <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.City)
                @Html.ValidationMessageFor(m => m.CmpAdrsSh.City)</span></td></tr>
          <tr><th><span class="editor-label">State:</span></th>
            <td><span class="editor-field">@Html.DropDownList("CmpAdrsSh.State", (IEnumerable<SelectListItem>)ViewBag._State)
                @Html.ValidationMessageFor(m => m.CmpAdrsSh.State)</span></td></tr>
          <tr><th><span class="editor-label">Zip:</span></th>
            <td><span class="editor-field">@Html.EditorFor(m => m.CmpAdrsSh.Zip)
                @Html.ValidationMessageFor(m => m.CmpAdrsSh.Zip)</span></td></tr>

    _ShipDel partial view:

        @model MyProject.ViewModels.MyViewModel
        <table class="detailtable" style="min-width:398px">
          <tr>
            <th style="padding-left:132px" colspan="2">
              <span class="editor-label">Shipping&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Address</span>
              <span class="editor-label" style="padding-left:10px; color:red">Marked for Deletion.</span></th></tr>
          <tr><td>To not Delete Select Cancel below!</td></tr>
          @Html.HiddenFor(m => m.CmpAdrsSh.Street)
          @Html.HiddenFor(m => m.CmpAdrsSh.City)
          @Html.HiddenFor(m => m.CmpAdrsSh.State)
          @Html.HiddenFor(m => m.CmpAdrsSh.Zip)

    Controller PartialViewResult action:

        public PartialViewResult SHIPDEL()
        {
            return PartialView("AJAX_Views/_ShipDel");
        }

    I tried adding this.ModelState to the action, but then the view will not render. I'm guessing I somehow have to pass the model to the SHIPDEL action first; I couldn't find an @Ajax.ActionLink overload that would allow this.

        public PartialViewResult SHIPDEL()
        {
            return PartialView("AJAX_Views/_ShipDel", this.ModelState);
        }

    In the _ShipDel partial view I need to expose the CmpAdrsSh properties so the model validates in the POST action; the model is empty at this point. How do I pass the existing vM model into the _ShipDel partial view? Thank you.
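    One way through this, offered as a sketch rather than a verified fix: Ajax.ActionLink issues a plain GET, so no model is posted to SHIPDEL. You can stash the view model server-side on the initial request and rehydrate it in the partial action (TempData here could equally be Session; the "EditVm" key is an invented name):

        public ActionResult Edit(int id)
        {
            var vM = new MyViewModel();
            TempData["EditVm"] = vM;   // keep a server-side copy for later AJAX round trips
            TempData.Keep("EditVm");
            return View(vM);
        }

        public PartialViewResult SHIPDEL()
        {
            // Rehydrate the model captured in Edit so the hidden fields render with values.
            var vM = TempData["EditVm"] as MyViewModel ?? new MyViewModel();
            TempData.Keep("EditVm");   // survive further AJAX requests on the same page
            return PartialView("AJAX_Views/_ShipDel", vM);
        }

    The partial's hidden CmpAdrsSh fields then post back inside the main form, so the POST action can still model-bind them.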

    Read the article

  • How to use value from primary accessdatasource control as parameter in select query for secondary accessdatasource control

    - by weedave
    Hi, I'm trying to display all orders placed. I have a primary AccessDataSource control with a select query that gets the customer information and the OrderID, and I want to use the OrderID value from this first query as a parameter for the secondary AccessDataSource control that selects the product information for the products in the order. In plain English, I want to: select product info from the product table where orderID = ? (where ? is the OrderID value from the first query). I have tried <%# Eval("OrderID") %>, but I get a "server tag not well formed" error. I do get results returned when I just type the order ID in, but then obviously every result (order) contains the same product info...

        <asp:Repeater ID="Repeater1" runat="server" DataSourceID="AccessDataSource1">
          <ItemTemplate>
            <asp:AccessDataSource ID="AccessDataSource2" runat="server"
                DataFile="~/App_Data/project.mdb"
                SelectCommand="SELECT orderDetails.OrderID, album.Artist, album.Album, album.Cost, album.ImageURL, orderDetails.Quantity, orderDetails.Total FROM (album INNER JOIN orderDetails ON album.AlbumID = orderDetails.AlbumID) WHERE (orderDetails.OrderID = ? )">
              <SelectParameters>
                <%-- The "server tag not well formed" error is on this line --%>
                <asp:Parameter Name="OrderID" DefaultValue="<%#Eval ("OrderID")%>" />
              </SelectParameters>
            </asp:AccessDataSource>
            <div class="viewAllOrdersOrderArea">
              <div class="viewAllOrdersOrderSummary">
                <p><b>Order ID: </b><%#Eval("OrderID")%></p>
                <h4>Shipping Details</h4>
                <p><b>Shipping Address: </b><%#Eval("ShippingName")%>, <%#Eval("ShippingAddress")%>, <%#Eval("ShippingTown")%>, <%#Eval("ShippingPostcode")%></p>
                <h4>Payment Details</h4>
                <p><b>Cardholder's Address: </b><%#Eval("CardHolder")%>, <%#Eval("BillingAddress")%>, <%#Eval("BillingTown")%>, <%#Eval("BillingPostcode")%></p>
                <p><b>Payment Method: </b><%#Eval("CardType")%></p>
                <p><b>Card Number: </b><%#Eval("CardNumber")%></p>
                <p><b>Start Date: </b><%#Eval("StartDate")%>, Expiry Date: <%#Eval("ExpiryDate")%></p>
                <p><b>Security Digits: </b><%#Eval("SecurityDigits")%></p>
                <h4>Ordered items:</h4>
                <asp:Repeater ID="Repeater2" runat="server" DataSourceID="AccessDataSource2">
                  <ItemTemplate>
                    <div style="display: block; float: left;">
                      <div class="viewAllOrdersProductImage">
                        <img width="70px" height="70px" alt="<%# Eval("Artist") %> - <%# Eval("Album") %>" src="assets/images/thumbs/<%# Eval("ImageURL") %>" />
                      </div>
                      <div style="display:block; float:left; padding-top:15px; padding-right:20px;">
                        <p><b><%# Eval("Artist") %> - <%# Eval("Album") %></b></p>
                        <p>£<%# Eval("Cost") %> x <%# Eval("Quantity") %> = £<%#Eval("Total")%></p>
                      </div>
                    </div>
                  </ItemTemplate>
                </asp:Repeater>
              </div>
            </div>
          </ItemTemplate>
        </asp:Repeater>
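    Two things worth checking here, offered as a sketch rather than a confirmed answer. The "server tag not well formed" error comes from nesting double quotes inside a double-quoted attribute, so the binding expression would at minimum need single outer quotes: DefaultValue='<%# Eval("OrderID") %>'. But since asp:Parameter is not a control, a more dependable route is to set the parameter from code-behind in the repeater's ItemDataBound event (wired up with OnItemDataBound="Repeater1_ItemDataBound" on the repeater):

        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class ViewAllOrders : Page
        {
            protected void Repeater1_ItemDataBound(object sender, RepeaterItemEventArgs e)
            {
                if (e.Item.ItemType != ListItemType.Item &&
                    e.Item.ItemType != ListItemType.AlternatingItem)
                    return;

                // Find this row's inner data source and feed it the row's OrderID.
                var ds = (AccessDataSource)e.Item.FindControl("AccessDataSource2");
                ds.SelectParameters["OrderID"].DefaultValue =
                    DataBinder.Eval(e.Item.DataItem, "OrderID").ToString();
            }
        }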

    Read the article

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made today to Windows Azure, and some of the great new features it provides. Today I'm also excited to announce the release of the Windows Azure SDK 2.2. Today's SDK release adds even more great features, including:
    - Visual Studio 2013 Support
    - Integrated Windows Azure Sign-In support within Visual Studio
    - Remote Debugging Cloud Services with Visual Studio
    - Firewall Management support within Visual Studio for SQL Databases
    - Visual Studio 2013 RTM VM Images for MSDN Subscribers
    - Windows Azure Management Libraries for .NET
    - Updated Windows Azure PowerShell Cmdlets and ScriptCenter
    The post below has more details on what's available in today's Windows Azure SDK 2.2 release. Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, and which highlights these features in a video demonstration.

    Visual Studio 2013 Support

    Version 2.2 of the Windows Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the preview of Visual Studio 2013, we recommend that you upgrade your projects to SDK 2.2. SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012.

    Integrated Windows Azure Sign-In within Visual Studio

    Integrated Windows Azure sign-in support within Visual Studio is one of the big improvements added with this Windows Azure SDK release. Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer inside Visual Studio and choose the "Connect to Windows Azure" context menu option to connect to Windows Azure. Doing this will prompt you to enter the email address of the account you wish to sign in with. You can use either a Microsoft Account (e.g. Windows Live ID) or an organizational account (e.g. Active Directory) as the email; the dialog will update with an appropriate login prompt depending on which type of email address you enter. Once you sign in, you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer (and you can start using them).

    With this new integrated sign-in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate. All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs, the dev tools, the Management Portal, and PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers and administrators working on a project. It also allows organizations and enterprises to use, in the cloud, the same authentication model that they use for their developers on-premises, and it ensures that employees who leave an organization immediately lose access to their company's cloud-based resources once their Active Directory account is suspended.
    Filtering/Subscription Management

    Once you log in within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking and choosing the "Filter Services" context menu within the Server Explorer. You can also use the "Manage Subscriptions" context menu to manage your Windows Azure subscriptions. Bringing up the "Manage Subscriptions" dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them. The "Certificates" tab allows you to continue to import and use management certificates to manage Windows Azure resources as well. We have not removed any functionality with today's update; all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine. The new integrated sign-in support provided with today's release is purely additive.

    Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them. We will enable them with integrated sign-in in a future update.

    Remote Debugging Cloud Resources within Visual Studio

    Today's Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you are now able to have more visibility than ever before into how your code is operating live in Windows Azure. Let's walk through how to enable remote debugging for a cloud service.

    Remote Debugging of Cloud Services

    To enable remote debugging for your cloud service, select Debug as the build configuration on the Common Settings tab of your cloud service's publish dialog wizard. Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox. Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code. Then use Visual Studio's Server Explorer to select the cloud service instance deployed in the cloud, and use the Attach Debugger context menu on the role or on a specific VM instance of it. Once the debugger attaches to the cloud service and a breakpoint is hit, you'll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real time, and see exactly how your app is running in the cloud.

    Today's remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud. Support for remote debugging cloud services is available as of today, and we'll also enable support for remote debugging web sites shortly.

    Firewall Management Support with SQL Databases

    By default we enable a security firewall around SQL Databases hosted within Windows Azure. This ensures that only your application (or IP addresses you approve) can connect to them, and helps make your infrastructure secure by default. This is great for protection at runtime, but can sometimes be a pain at development time, since by default you can't connect to or manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it. One of the cool features we've added with today's release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.
    Now with the SDK 2.2 release, when you try to connect to a SQL Database using the Visual Studio Server Explorer and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address. You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases the logic to detect your local IP may not be sufficient (for example, you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access; the new Add Firewall Rule dialog also makes this easy to do. Once connected, you'll be able to manage your SQL Database directly within the Visual Studio Server Explorer. This makes it much easier to work with databases in the cloud.

    Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers

    Last week we released the general availability release of Visual Studio 2013 to the web. This is an awesome release with a ton of new features. With today's Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers. This enables you to create a VM in the cloud with VS 2013 pre-installed on it with only a few clicks. Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013.

    Windows Azure Management Libraries for .NET (Preview)

    Having the ability to automate the creation, deployment, and tear-down of resources is a key requirement for applications running in the cloud. It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET. These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc.). Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API.

    Modern .NET Developer Experience

    We've worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today:
    - Portable Class Library (PCL) support targeting applications built for any .NET platform (no platform restriction)
    - Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning
    - Support for async/await task-based asynchrony (with easy sync overloads)
    - Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc.
    - Factored for easy testability and mocking
    - Built on top of popular libraries like HttpClient and Json.NET

    Below is a list of a few of the management client classes that are shipping with today's initial preview release; each supports operations for the listed assets (and potentially more):
    - ManagementClient: locations, credentials, subscriptions, certificates
    - ComputeManagementClient: hosted services, deployments, virtual machines, virtual machine images and disks
    - StorageManagementClient: storage accounts
    - WebSiteManagementClient: web sites, web site publish profiles, usage metrics, repositories
    - VirtualNetworkManagementClient: networks, gateways

    Automating Creating a Virtual Machine using .NET

    Let's walk through an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a virtual machine. I'm deliberately showing a scenario with a lot of custom options configured, including VHD image gallery enumeration, attaching data drives, and network endpoint and firewall rule setup, to show off the full power and richness of what the new library provides. We'll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM gallery: we'll search for the first VM image that has the word "Windows" in it and use that as our base image to build the VM from, and we'll then create a cloud service container in the West US region to host it. We can then customize some options on it, such as setting up a computer name, admin username/password, and hostname, and we'll also open up a remote desktop (RDP) endpoint through its security firewall. We'll then specify the VHD host and data drives that we want to mount on the virtual machine, and specify the size of the VM we want to run it in. Once everything has been set up, the call to create the virtual machine is executed asynchronously. In a few minutes we'll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints, and firewall settings) fully configured and ready for us to use.

    Preview Availability via NuGet

    The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you'll need to add the -IncludePrerelease switch when you go to retrieve the packages. You can also install them within your .NET projects by right-clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command; make sure to select the "Include Prerelease" drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios.

    Open Source License

    The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure, whether they are for virtual machines, cloud services, storage accounts, web sites, and more. Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license, and it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute.

    PowerShell Enhancements and our New Script Center

    Today, we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here.
    Here are some of the improvements provided with it:
    - Windows Azure Active Directory authentication support
    - A Script Center providing many sample scripts to automate common tasks on Windows Azure
    - New cmdlets for Media Services and SQL Database

    Script Center

    Windows Azure enables you to script and automate a lot of tasks using PowerShell. People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn how to get started with scripting Windows Azure in the getting-started article, and then find many sample scripts across different solutions, including infrastructure, data management, web, and more. All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage.

    Summary

    Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center's growing set of .NET developer resources to guide your development efforts, today's Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
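    To give a flavor of the new management libraries in code form, here is a small C# sketch of the first step of the VM walkthrough above (creating the cloud service container). It is based on the preview NuGet packages as best I recall them; the subscription id and certificate path are placeholders, and exact type names may shift before a stable release:

        using System;
        using System.Security.Cryptography.X509Certificates;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.Management.Compute;
        using Microsoft.WindowsAzure.Management.Compute.Models;

        class CreateCloudService
        {
            static void Main()
            {
                // Placeholder credentials: your subscription id plus an uploaded
                // management certificate (integrated sign-in is a VS-only feature).
                var credentials = new CertificateCloudCredentials(
                    "your-subscription-id",
                    new X509Certificate2(@"C:\certs\management.pfx", "password"));

                // CloudContext.Clients is the client factory the preview libraries expose.
                var compute = CloudContext.Clients.CreateComputeManagementClient(credentials);

                // Create the cloud service container in West US that will host the VM;
                // the VM deployment itself would follow via compute.VirtualMachines.
                compute.HostedServices.Create(new HostedServiceCreateParameters
                {
                    ServiceName = "mytestvm-svc",
                    Label = "mytestvm-svc",
                    Location = "West US"
                });

                Console.WriteLine("Hosted service created.");
            }
        }

    The same client exposes async variants of each operation (per the async/await bullet above), so the sync calls shown here are just the convenience overloads.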

    Read the article
