Search Results

Search found 14532 results on 582 pages for 'dynamic types'.

Page 156/582 | < Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up using Confluence as the tool for designing the logical domain model, with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field that made this posting possible.

    Confluence Plugins
    Here is a list of Confluence plugins used in this solution. Install these before trying out the macros used below.
    - Copy Space: Allows a space administrator to copy a space, including the pages within the space
    - Metadata: Supports adding metadata to Wiki pages
    - Label: Manages labeling of pages
    - Linking: Contains macros for linking to templates, the dashboard and others
    - Table: Enhances the table capability in Confluence

    Creating a Confluence Space
    First we need to create a new Confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator if you do not have permissions to do this. For illustrative purposes, all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough to be implemented, make a copy of the Confluence space (see Copy Space plugin). In this way you create a stable version of the logical domain model, while further design can continue in the new, copied space. Typically, the implementation phase will result in a database design and/or an XSD schema design.

    Add Space Templates
    Go to the Home page of your Confluence space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates.

        Name: attribute
        {metadata-list}
        || Name | |
        || Type | |
        || Format | |
        || Description | |
        {metadata-list}
        {add-label:attribute}

        Name: primary-type
        {metadata-list}
        || Name | ||
        || Type | ||
        || Format | ||
        || Description | ||
        {metadata-list}
        {add-label:primary-type}

        Name: complex-type
        {metadata-list}
        || Name | ||
        || Description | ||
        {metadata-list}
        h3. Attributes
        || Name || Type || Format || Description ||
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        {add-label:complex-type,entity}

    The metadata-list macro (see Metadata plugin) will save a list of metadata values to the page. The add-label macro (see Label plugin) will automatically label the page.

    Primary Types Page
    Our first page to add will act as a container for our primary types. Switch to Wiki markup when adding the following content to the page.

        | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents}

    Once the page is created, click Add new primary type (the add-page macro) to start creating new pages. Here is an example of the input for the LocalDate page. Enclose LocalDate in square brackets [] to make the page linkable. Again, switch to Wiki markup before editing.

        {metadata-list}
        || Name | [LocalDate] ||
        || Type | Date ||
        || Format | YYYY-MM-DD ||
        || Description | Date in local time zone. YYYY = year, MM = month and DD = day ||
        {metadata-list}
        {add-label:primary-type}

    The metadata-report macro will show a tabular report of all child pages.

    Attributes Page
    The next page will act as a container for all of our attributes.

        | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} |
        {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants}

    Here is an example of the input for the startDate page.

        {metadata-list}
        || Name | [startDate] ||
        || Type | [LocalDate] ||
        || Format | {metadata-from:LocalDate|Format} ||
        || Description | The project's start date ||
        {metadata-list}
        {add-label:attribute}

    Using the metadata-from macro we fetch the text from the previously created LocalDate page.

    Complex Types Page
    The last page in this example shows how attributes can be combined to form more complex types.

        h3. Intro
        Overview of complex types in the domain model.
        | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ |
        {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents}

    Here is an example of the input for the ProjectType page.

        {metadata-list}
        || Name | [ProjectType] ||
        || Description | Represents a project ||
        {metadata-list}
        h3. Attributes
        || Name || Type || Format || Description ||
        | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} |
        | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} |
        | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} |
        | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} |
        {add-label:complex-type,entity}

    Gives us this.

    Conclusion
    Using a web-based corporate Wiki like Confluence to create a logical domain model increases the collaboration between people with different roles in the enterprise. It's my belief that this helps the domain model to be more accurate and better documented. In our real project we have more pages than illustrated here to complete the documentation. We also still use UML tools to create different types of diagrams that Confluence does not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

    Read the article

  • Can a loosely typed language be considered truly object oriented?

    - by user61852
    Can a loosely typed programming language like PHP really be considered object oriented? I mean, the methods don't have return types, and method parameters have no declared types either. Doesn't class design require methods to have a return type? Don't method signatures have specifically-typed parameters? How can OOP techniques help you code in PHP if you always have to check the types of the parameters received, because the language doesn't enforce types? Please, if I'm wrong, explain it to me. When you design things using UML, then code classes in PHP with no return-typed methods and no-type parameters... Is the code really compliant with the UML design? You spend time designing the architecture of your software, then the compiler doesn't force the programmer to follow your design while coding, letting him or her assign any object variable to any other variable with no "type-mismatch" warning.

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review, we needed a way for our business users to upload documents to a SharePoint 2007 document library en masse, set the metadata, set the permissions per document, and to do so easily.

    Let's look at the pieces that make up our solution. First, we have the Windows Form application. This app is deployed using Click Once and calls SharePoint web services in order to upload files, and then calls web services to set the metadata (SharePoint columns and permissions). Second, we have a custom action. The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string. And lastly, we have the web services that the Windows Form application calls. For our solution, we used both out-of-the-box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents. Now, let's look at the technical details of each of these pieces. (All of the code is downloadable from here: )

    Windows Form application deployed via Click Once
    The Windows Form application, called "Custom Upload", has just a few classes in it:
    - Custom Upload: the form
    - FileList.xsd: the dataset used to track the names of the files and their metadata values
    - SharePointUpload: this class handles uploading the file
    SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object, because we needed to be able to set the timeout value.

        public bool UploadDocument(string localFilename, string remoteFilename)
        {
            bool result = true;
            // Need to use an HttpWebRequest object instead of a WebClient object
            // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
            try
            {
                req.Method = "PUT";
                req.Timeout = 60 * 1000; // convert seconds to milliseconds
                req.AllowWriteStreamBuffering = true;
                req.Credentials = System.Net.CredentialCache.DefaultCredentials;
                req.SendChunked = false;
                req.KeepAlive = true;
                Stream reqStream = req.GetRequestStream();
                FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);
                byte[] inData = new byte[4096];
                int bytesRead = rdr.Read(inData, 0, inData.Length);
                while (bytesRead > 0)
                {
                    reqStream.Write(inData, 0, bytesRead);
                    bytesRead = rdr.Read(inData, 0, inData.Length);
                }
                reqStream.Close();
                rdr.Close();
                System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
                if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created)
                {
                    String msg = String.Format(
                        "An error occurred while uploading this file: {0}\n\nError response code: {1}",
                        System.IO.Path.GetFileName(localFilename),
                        response.StatusCode.ToString());
                    LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                    result = false;
                }
            }
            catch (Exception ex)
            {
                LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
                result = false;
            }
            return result;
        }

    The class also contains the LogException() and LogWarning() methods. When the application is launched, it parses the query string for some initial values. The query string looks like this:

        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    The Srv value is the path to the server (my Virtual Machine is named "clickonce"), Sec is short for security (meaning HTTPS or HTTP), Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site. Speed is used to calculate an estimate of the download speed for each file. We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be. This allowed us to give users a warning that a file is too large before we even attempt to upload it.

    Another critical piece is the metadata collection. We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library's content types. The user then selects one of the content types from the drop-down list, and then we query SharePoint to get a list of the fields that make up that content type. We used both an out-of-the-box web service and one that we custom built in order to get these values. Once we have the content type fields, we then add controls to the form. Which type of control we add depends on the data type of the field (DateTime pickers for date/time fields, etc.). We didn't write code to cover every data type, since we were working with a limited set of content types and field data types. Here's a screen shot of the form, before and after someone has selected the content type and our code has added the custom controls.

    The other piece of metadata we collect is in the upper right corner of the app, "Users with access". This box lists the different SharePoint groups that we have set up, and by checking the boxes, the user can set the permissions on the uploaded documents. All of this metadata is collected and submitted to our custom web service, which then sets the values on the documents in the list. We'll look at these web services in a future post. In the next post, we'll walk through the custom action we built.
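    For illustration, here is a minimal C# sketch of how such a query string might be parsed into a settings dictionary. This is my own sketch, not the article's actual code; the class name and usage below are assumptions.

        // Hypothetical helper: splits "Srv=clickonce&Sec=N&..." into key/value pairs.
        using System;
        using System.Collections.Generic;

        static class QueryStringSettings
        {
            public static Dictionary<string, string> Parse(string queryString)
            {
                var settings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
                foreach (string pair in queryString.Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries))
                {
                    string[] parts = pair.Split(new[] { '=' }, 2);
                    // Keep empty values (e.g. "SiteName=") so callers can detect them.
                    settings[parts[0]] = parts.Length > 1 ? parts[1] : string.Empty;
                }
                return settings;
            }
        }

        // Usage (values taken from the example string above):
        // var s = QueryStringSettings.Parse("Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50");
        // int maxSize = int.Parse(s["Max"]);   // 50
        // bool useHttps = s["Sec"] == "Y";     // "N" here, so false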

    Read the article

  • Which will be faster? Switching shaders, or ignoring that some cases don't need the full code?

    - by PolGraphic
    I have two types of 2D objects. For the first type (about 70% of objects), I need this code in the shader:

        float2 texCoord = input.TexCoord + textureCoord.xy

    But for the second type I have to use:

        float2 texCoord = fmod(input.TexCoord, texCoordM.xy - textureCoord.xy) + textureCoord.xy

    I can use the second version for the first type as well, but it will be a little slower (the fmod is useless there, since input.TexCoord will always be lower than texCoordM.xy - textureCoord.xy). My question is, which way will be faster:

    1. Making two independent shaders for the two types of rectangles, grouping the rectangles by type and switching shaders during rendering.
    2. Making one shader and using an if statement.
    3. Making one shader and ignoring that sometimes (70% of cases) I don't need to use fmod.

    Read the article

  • Interface hierarchy design for separate domains

    - by jerzi
    There are businesses and people. People can be liked and businesses can be commented on:

        class Like
        class Comment
        class Person implements iLikeTarget
        class Business implements iCommentTarget

    Likes and comments are performed by a user (person), so they are authored:

        class Like implements iAuthored
        class Comment implements iAuthored

    A person's likes can also be used in their history:

        class history
        class Like implements iAuthored, iHistoryTarget

    Now a smart developer comes along and says each history entry is attached to a user, so history should be authored:

        interface iHistoryTarget extends iAuthored

    so it can be removed from class Like:

        class Person implements iLikeTarget
        class Business implements iCommentTarget
        class Like implements iHistoryTarget
        class Comment implements iAuthored
        class history
        interface iHistoryTarget extends iAuthored

    Here another smart guy comes along with a question: how can I capture the authored fact in the Like and Comment classes? He may know nothing about the history concept in the project. As this kind of functionality scales, interfaces tend to migrate into the types that encapsulate them, which makes the typing stronger; on the other hand, explicitness suffers, and the end users of the code have a lot more to work through. So here is the question: should I fold those dependent types into their parent types (interface hierarchies), explicitly repeat each type at every single level of my type system, or ...?
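    For concreteness, here is a minimal C# sketch of the final hierarchy described above, as shown in the last listing. It is a rough transliteration of the poster's pseudo-code with empty bodies, not a recommendation either way.

        // Sketch of the hierarchy from the question, transliterated to C#.
        // Only the type relationships are shown; members are omitted.
        interface iAuthored { }
        interface iLikeTarget { }
        interface iCommentTarget { }
        interface iHistoryTarget : iAuthored { }   // "history is attached to a user, so it is authored"

        class Person : iLikeTarget { }
        class Business : iCommentTarget { }
        class Like : iHistoryTarget { }            // authored only indirectly, via iHistoryTarget
        class Comment : iAuthored { }
        class history { }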

    Read the article

  • Which MIME types to compress? And what if I omit the `type` attribute from the HTML?

    - by rockyraw
    Per my request, my web host turned mod_deflate ON. In my cPanel I now have an "Optimize Website" button. Inside that menu I can choose either "Compress all content" or "Compress the specified MIME types", with the following default MIME types: "text/html text/plain text/xml". Which option should I choose and why? If I choose option 2, which types should I add (is there a recommended list with the exact way they should be written)? According to Google's recommendations, I have omitted the type="text/css" attributes from all CSS references, as well as the type="text/javascript" attributes from all script references. Would this hinder the "gzipping" process?

    Read the article

  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in disk management, even though it is connected to that SAN volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering if, on the SAN, changing the path to the disk (such as the disk being presented to the host via a different iSCSI target IQN or a different LUN #) would cause a volume to be offlined in disk management. Thanks! Edit: I have already found two reasons why a disk might be set offline, disk signature collisions and the SAN disk policy. Bounty would be awarded to someone who can find further documented reasons related to changes in the volume's path. Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/

    Read the article

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing a web server for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose?

    1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
    2. Lighttpd's development has evidently gotten slower than it was a good time ago.
    3. The documentation is pretty good for all three, and hopefully, so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these...

    1. Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
    2. If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface plus really good documentation. Yes?

    I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon if anything I've said is offensive.

    Read the article

  • Should I use nginx exclusively, or have it as a proxy to Tomcat (performance related)?

    - by Kevin
    I've planned to create a website that'll be pretty heavy on dynamic content, and I want to know what would be the wisest choice for part of my web stack. Right now I'm trying to decide whether I should develop upon nginx, using PHP to deliver the dynamic content, or use nginx as a proxy to Tomcat and use servlets to deliver the dynamic content. I have a good amount of experience with Java, JSP, and servlets, so that's a plus right off the bat. Also, since Java is a compiled language, it will execute faster than PHP (it is implied here that Java is around 37x faster than PHP) and will create the web pages faster. I have no experience with PHP; however, I'm under the impression that it is easy to pick up. It's slower than Java, but since the client will only be communicating with nginx, I'm thinking that serving the dynamically created web pages to the client will be faster this way. Considering these things, I'd like to know:

    1. Are my assumptions correct?
    2. Where does the bottleneck occur: creating pages or serving them back to the client?
    3. Will proxying Tomcat with nginx give me any of nginx's performance benefits if I'm going to be using Tomcat to generate the dynamic content (keeping in mind my site is going to be heavy in this aspect)?

    I don't mind learning PHP if, in the end, it's going to give me the best performance. I just want to know what would be the best choice from that standpoint.

    Read the article

  • Use Drive Mirroring for Instant Backup in Windows 7

    - by Trevor Bekolay
    Even with the best backup solution, a hard drive crash means you'll lose a few hours of work. By enabling drive mirroring in Windows 7, you'll always have an up-to-date copy of your data. Windows 7's mirroring – which is only available in the Professional, Enterprise, and Ultimate editions – is a software implementation of RAID 1, which means that two or more disks are holding the exact same data. The files are constantly kept in sync, so that if one of the disks fails, you won't lose any data. Note that mirroring is not technically a backup solution, because if you accidentally delete a file, it's gone from both hard disks (though you may be able to recover the file). As an additional caveat, having mirrored disks requires changing them to "dynamic disks," which can only be read within modern versions of Windows (you may have problems working with a dynamic disk in other operating systems or in older versions of Windows). See this Wikipedia page for more information. You will need at least one empty disk to set up disk mirroring. We'll show you how to mirror an existing disk (of equal or lesser size) without losing any data on the mirrored drive, and how to set up two empty disks as mirrored copies from the get-go.

    Mirroring an Existing Drive
    Click on the start button and type partitions in the search box. Click on the Create and format hard disk partitions entry that shows up. Alternatively, if you've disabled the search box, press Win+R to open the Run window and type in: diskmgmt.msc. The Disk Management window will appear. We've got a small disk, labeled OldData, that we want to mirror in a second disk of the same size. Note: The disk that you will use to mirror the existing disk must be unallocated. If it is not, then right-click on it and select Delete Volume… to mark it as unallocated. This will destroy any data on that drive. Right-click on the existing disk that you want to mirror. Select Add Mirror…. Select the disk that you want to use to mirror the existing disk's data and press Add Mirror. You will be warned that this process will change the existing disk from basic to dynamic. Note that this process will not delete any data on the disk! The new disk will be marked as a mirror, and it will start copying data from the existing drive to the new one. Eventually the drives will be synced up (it can take a while), and any data added to the E: drive will exist on both physical hard drives.

    Setting Up Two New Drives as Mirrored
    If you have two new equal-sized drives, you can format them to be mirrored copies of each other from the get-go. Open the Disk Management window as described above. Make sure that the drives are unallocated. If they're not, and you don't need the data on either of them, right-click and select Delete volume…. Right-click on one of the unallocated drives and select New Mirrored Volume…. A wizard will pop up. Click Next. Click on the drives you want to hold the mirrored data and click Add. Note that you can add any number of drives. Click Next. Assign it a drive letter that makes sense, and then click Next. You're limited to using the NTFS file system for mirrored drives, so enter a volume label, enable compression if you want, and then click Next. Click Finish to start formatting the drives. You will be warned that the new drives will be converted to dynamic disks. And that's it! You now have two mirrored drives. Any files added to E: will reside on both physical disks, in case something happens to one of them.
    Conclusion
    While the switch from basic to dynamic disks can be a problem for people who dual-boot into another operating system, setting up drive mirroring is an easy way to make sure that your data can be recovered in case of a hard drive crash. Of course, even with drive mirroring, we advocate regular backups to external drives or online backup services.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM, divided into two technologies: dynamic and static.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device. The capacitor can store any charge value within a range; a threshold value determines whether the charge is interpreted as a 1 or 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM. Thus it is more dense and less expensive than SRAM. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM; thus SRAM is generally used for cache memory and DRAM is used for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce the costs of fabrication, we have PROMs. PROMs are:
    - Written only once
    - Non-volatile
    - Written after fabrication
    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely:
    - EPROM
    - EEPROM
    - Flash memory

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    - Hard failure: a permanent physical defect, so that the memory cell or cells cannot reliably store data
    - Soft error: a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.)
    Most modern main memory systems include logic for both detecting and correcting errors. Error detection works as follows:
    - When data is to be read into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors
    The error checking provides one of three possible results:
    - No errors are detected: the fetched data bits are sent out
    - An error is detected, and it is possible to correct the error: the data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected, but it is not possible to correct it: this condition is reported

    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code – refer to the textbook (pg. 188 – 189).

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including:
    - SDRAM (Synchronous DRAM)
    - DDR-DRAM
    - RDRAM

    SDRAM (Synchronous DRAM)
    SDRAM exchanges data with the processor synchronized to an external clock signal and running at the full speed of the processor/memory bus without imposing wait states. SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed. SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism. SDRAM performs best when it is transferring large blocks of data serially. There is now an enhanced version of SDRAM known as double data rate SDRAM, or DDR-SDRAM, that overcomes the once-per-cycle limitation of SDRAM.
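    As a refresher on the Hamming code mentioned above, here is my own small C# sketch of a Hamming(7,4) encoder and syndrome check (not the textbook's presentation; the bit layout follows the standard convention of parity bits at positions 1, 2 and 4):

        // Hamming(7,4): encodes 4 data bits into 7 bits and can locate any single-bit error.
        using System;

        static class Hamming74
        {
            // data: d1..d4 as the low 4 bits (d1 = least significant).
            public static int Encode(int data)
            {
                int d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
                    d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
                int p1 = d1 ^ d2 ^ d4;   // covers positions 1,3,5,7
                int p2 = d1 ^ d3 ^ d4;   // covers positions 2,3,6,7
                int p3 = d2 ^ d3 ^ d4;   // covers positions 4,5,6,7
                // Codeword layout (position 1 = least significant bit): p1 p2 d1 p3 d2 d3 d4
                return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
            }

            // Returns 0 if no error is detected, otherwise the 1-based position of the flipped bit.
            public static int Syndrome(int codeword)
            {
                int Bit(int pos) => (codeword >> (pos - 1)) & 1;
                int s1 = Bit(1) ^ Bit(3) ^ Bit(5) ^ Bit(7);
                int s2 = Bit(2) ^ Bit(3) ^ Bit(6) ^ Bit(7);
                int s3 = Bit(4) ^ Bit(5) ^ Bit(6) ^ Bit(7);
                return s1 | (s2 << 1) | (s3 << 2);
            }
        }

        // Example: encode the 4 data bits 1011, flip bit 6, and the syndrome points at position 6.
        // int code = Hamming74.Encode(0xB);          // 85
        // int corrupted = code ^ (1 << 5);           // flip position 6
        // Console.WriteLine(Hamming74.Syndrome(corrupted));   // prints 6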

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.

    - In InnoDB, compressed pages are a fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time.
    - InnoDB uses zlib for compression.
    - The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well, i.e.: we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always only have the compressed page.
    - When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync, i.e.: changes are applied to both atomically.
    - Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions.
    - DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge.
    - A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages.

    Now let's talk about the three major improvements that we made in MySQL 5.6.

    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.

    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and the allowed values are 1 to 9. Again the parameter is dynamic, i.e.: you can change it on the fly.

    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e.: we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well; that is, we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed:

    - innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compress ops that must fail before we start using padding. The value 0 has the special meaning of disabling the padding.
    - innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.

    Read the article

  • Dynamic Multiple Choice (Like a Wizard) - How would you design it? (e.g. Schema, AI model, etc.)

    - by henry74
    This question can probably be broken up into multiple questions, but here goes... In essence, I'd like to allow users to type in what they would like to do, and provide a wizard-like interface that asks for the information which is missing to complete the requested query. For example, let's say a user types: "What is the weather like in Springfield?" We recognize the user is interested in weather, but it could be Springfield, Il or a Springfield in another state. A follow-up question would be:

        What Springfield did you want weather for?
        1 - Springfield, Il
        2 - Springfield, Wi

    You can probably think of a million examples where a request is missing key data or is ambiguous. Make the assumption that the gist of what the user wants can be understood, but there are missing pieces of data required to complete the request. Perhaps you can take it as far back as asking what the user wants to do and "leading" them to a query. This is not AI in the sense of taking any input and truly understanding it. I'm not referring to having some way to hold a conversation with a user. It's about inferring what a user wants, checking to see if there is an applicable service to be provided, identifying the inputs needed and overlaying that on top of what's missing from the request, then asking the user for the remaining information. That's it! :-)

    How would you want to store the information about services? How would you go about determining what was missing from the input data? My thoughts: use regex expressions to identify clear pieces of information. These will be matched to the parameters of a service. Figure out which parameters do not have matching data and look up the associated question for those parameters. Ask those questions and capture the answers. Re-run the service, passing in the newly captured data. These would be the more free-form questions. For multiple choice, identify the ambiguity and search for potential matches, ranked in order of likelihood (adding in user history/preferences to help decide). Provide the top 3 as choices. See the sketch below for one possible shape of this.

    Thoughts appreciated. Cheers, Henry
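    To make the idea concrete, here is one possible shape for it as a C# sketch. The Service/Parameter types, the regex and the question text are all hypothetical and only illustrate matching extracted values against required parameters; they are not a proposed design.

        // Hypothetical sketch: describe a service's parameters, extract what we can from the
        // user's text, and collect follow-up questions for whatever is still missing.
        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class Parameter
        {
            public string Name;
            public Regex Extractor;        // how to recognize a value for this parameter in free text
            public string FollowUpQuestion;
        }

        class Service
        {
            public string Name;
            public List<Parameter> Parameters = new List<Parameter>();

            // Returns the questions that still need to be asked for this request.
            public List<string> MissingQuestions(string userText, IDictionary<string, string> captured)
            {
                var questions = new List<string>();
                foreach (var p in Parameters)
                {
                    var match = p.Extractor.Match(userText);
                    if (match.Success)
                        captured[p.Name] = match.Groups[1].Success ? match.Groups[1].Value : match.Value;
                    else if (!captured.ContainsKey(p.Name))
                        questions.Add(p.FollowUpQuestion);
                }
                return questions;
            }
        }

        // Usage (all names and patterns are made up):
        // var weather = new Service { Name = "Weather" };
        // weather.Parameters.Add(new Parameter {
        //     Name = "City",
        //     Extractor = new Regex(@"in ([A-Z][a-z]+(, [A-Z][a-z])?)"),
        //     FollowUpQuestion = "Which city did you want weather for?" });
        // var captured = new Dictionary<string, string>();
        // var toAsk = weather.MissingQuestions("What is the weather like in Springfield?", captured);
        // // "Springfield" is captured but ambiguous, so a second pass would compare it against
        // // known cities and offer "Springfield, Il" vs "Springfield, Wi" as ranked choices.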

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases needs to go through DTAP. If you dissect a publishing site, in most cases you have the following findings:

    - The publishing site spans a site collection
    - The branding of the site is specified in the root site, because:
      - Master pages live in the root site (/_catalogs/masterpage)
      - Page layouts live in the root site (/_catalogs/masterpage)
      - The style library lives in the root site (/Style Library) and contains images, css, javascript, xslt transformations for your CQWP's, …
      - Preconfigured web parts live in the root site (/_catalogs/wp)
    - The root site and subsites contain a document library called Pages (or your language-specific version of it) containing publishing pages that use the page layouts and master pages
    - The site collection contains content types, fields and lists

    When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts, we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts in the above list, like content types, fields and lists, can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can't be uploaded to the SharePoint content database. The good thing is that these artifacts are the artifacts that don't change that much in the development of a SharePoint Publishing solution. There are however multiple ways to create these artifacts:

    1. Use paper script: create them manually in each of the environments based on documentation
    2. Automate the creation of the artifacts using (PowerShell) script
    3. Develop a WSP package to create these artifacts

    I'm not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kinds of XML files, and it is not allowed to modify these artifacts when they are in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get IDs, and that these IDs must be used by the metadata on, for example, page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these IDs using PowerShell. For example, consider the following metadata definition for the page layout, contactpage-wcm.aspx.properties.ps1:

        # This script must return a hashtable @{ name=value; ... } of field name-value pairs
        # for the content file that this script applies to.
        # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any)
        # Note that fields must exist; they can be updated but not created or deleted.
        # This script is called right after the file is deployed to SharePoint.

        # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint,
        # e.g. to calculate properties and values at deployment time.

        param([string]$SourcePath, [string]$RelativeUrl, $Context)
        @{
            "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
            "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
            "PublishingHidden" = "1";
            "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
        }

    The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can, at deploy time, resolve the required information from the server we are deploying to. I personally prefer the second option: automate creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like the one shown here. Note that this project uses DualLayout. So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • Select videos using UIImagePickerController in 2G/3G

    - by Raj
    Hi, I am facing a problem wherein I cannot select videos from the photo album on an iPhone 2G/3G device. The default Photos application does show videos and is capable of playing them, which in turn means that UIImagePickerController should clearly be capable of showing videos in the photo album and selecting them. I have coded this to determine whether the device is capable of snapping a photo, recording video, selecting photos and selecting videos:

        // Check if camera and video recording are available:
        [self setCameraAvailable:NO];
        [self setVideoRecordingAvailable:NO];
        [self setPhotoSelectionAvailable:NO];
        [self setVideoSelectionAvailable:NO];

        // For live mode:
        NSArray *availableTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
        NSLog(@"Available types for source as camera = %@", availableTypes);
        if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera])
        {
            if ([availableTypes containsObject:(NSString*)kUTTypeMovie])
                [self setVideoRecordingAvailable:YES];
            if ([availableTypes containsObject:(NSString*)kUTTypeImage])
                [self setCameraAvailable:YES];
        }

        // For photo library mode:
        availableTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypePhotoLibrary];
        NSLog(@"Available types for source as photo library = %@", availableTypes);
        if ([availableTypes containsObject:(NSString*)kUTTypeImage])
            [self setPhotoSelectionAvailable:YES];
        if ([availableTypes containsObject:(NSString*)kUTTypeMovie])
            [self setVideoSelectionAvailable:YES];

    The resulting logs for a 3G device are as follows:

        2010-05-03 19:09:09.623 xyz [348:207] Available types for source as camera = ( "public.image" )
        2010-05-03 19:09:09.643 xyz [348:207] Available types for source as photo library = ( "public.image" )

    As the logs state, for the photo library the string equivalent of kUTTypeMovie is not available, and hence the UIImagePickerController does not show the movie files in the photo library (or rather throws an exception if we set a source types array which includes kUTTypeMovie). I haven't tested on a 3GS, but I am sure that this problem does not exist there, with reference to other threads. I have built the app against both 3.0 (base SDK) and 3.1, but with the same results. This issue is already discussed in the thread: http://www.iphonedevsdk.com/forum/iphone-sdk-development/36197-uiimagepickercontroller-does-not-show-movies-albums.html But it does not seem to host a solution. Any solutions to this problem? Thanks and Regards, Raj Pawan

    Read the article

  • Determine complex type from a primitive type using reflection

    - by Nilotpal Das
    I am writing a tool where I need to reflect upon methods, and if the parameters of the methods are complex types, then I need to perform certain types of actions, such as instantiating them, etc. Now I saw the IsPrimitive property on the Type variable. However, it shows string and decimal as complex types, which technically isn't incorrect. However, what I really want is to be able to distinguish developer-created class types from system-defined data types. Is there any way that I can do this?
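    For what it's worth, here is a rough C# sketch of one common heuristic along these lines: checking the declaring assembly rather than IsPrimitive alone. It is my own sketch and the exact rules (enums, DateTime, etc.) are assumptions you may want to adjust.

        // Sketch: treat a type as "developer-defined" when it is not a primitive, not one of the
        // common built-in types, and not declared in the same assembly as System.Object.
        using System;

        static class TypeClassifier
        {
            public static bool IsDeveloperDefined(Type type)
            {
                if (type.IsPrimitive) return false;                        // int, bool, double, ...
                if (type == typeof(string) || type == typeof(decimal)) return false;
                if (type == typeof(DateTime) || type == typeof(TimeSpan)) return false;
                if (type.IsEnum) return false;                             // debatable; adjust to taste
                // Anything declared in the core framework assembly is treated as system-defined.
                return type.Assembly != typeof(object).Assembly;
            }
        }

        // Usage:
        // TypeClassifier.IsDeveloperDefined(typeof(string));      // false
        // TypeClassifier.IsDeveloperDefined(typeof(MyOwnClass));  // true (MyOwnClass is a hypothetical user type)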

    Read the article

  • Android NDK r4 san-angeles problem

    - by Goz
    Hi All, I'm starting to learn the Android NDK and I've instantly come up against a problem. I've built the tool chain (which took a LOT longer than I was expecting!!) and I've compiled the C++ code with no problems, and now I'm trying to build the Java code. Instantly I come up against a problem. There is a file "main.xml":

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:orientation="vertical"
            android:layout_width="match_parent"
            android:layout_height="match_parent" >
            <TextView
                android:layout_width="match_parent"
                android:layout_height="wrap_content"
                android:text="Hello World, DemoActivity" />
        </LinearLayout>

    and I get the following errors:

        Description Resource Path Location Type
        error: Error: String types not allowed (at 'layout_height' with value 'match_parent'). main.xml /DemoActivity/res/layout line 2 Android AAPT Problem
        error: Error: String types not allowed (at 'layout_height' with value 'match_parent'). main.xml /DemoActivity/res/layout line 2 Android AAPT Problem
        error: Error: String types not allowed (at 'layout_width' with value 'match_parent'). main.xml /DemoActivity/res/layout line 2 Android AAPT Problem
        error: Error: String types not allowed (at 'layout_width' with value 'match_parent'). main.xml /DemoActivity/res/layout line 7 Android AAPT Problem
        error: Error: String types not allowed (at 'layout_width' with value 'match_parent'). main.xml /DemoActivity/res/layout line 7 Android AAPT Problem

    So I can see the problem lies in the fact that these "match_parent" strings are in there. Anyone know how to fix this?

    Read the article

  • .htaccess allow from hostname?

    - by Mikey B
    Ubuntu 9.10, Apache2. Hi Guys, Long story short, I need to restrict access to a certain part of my web site based on a dynamic IP source address that changes every now and then. Historically, I've just added the following to .htaccess...

        order deny,allow
        deny from all
        # allow my dynamic IP address
        allow from <dynamic ip>

    But the problem is that I'll have to manually make this change every time the IP changes. Ideally I'd like to specify a hostname instead... something like:

        order deny,allow
        deny from all
        # allow my host
        allow from hostname.whatever.local

    That doesn't seem to have worked, though. I get an error 403 - access forbidden. Does .htaccess not support hostnames?

    Read the article

  • Serializing type definitions?

    - by Dave
    I'm not positive I'm going about this the right way. I've got a suite of applications that have varying types of output (custom-defined types). For example, I might have a type called Widget:

        Class Widget
            Public name as String
        End Class

    Throughout the course of operation, when a user experiences a certain condition, the application will take the output instance of Widget that the user received, serialize it, and log it to the database, noting the name of the type. Now, I have other applications that do something similar, but instead of dealing with Widget, it could be some totally random other type with different attributes; but again, I serialize the instance, log it to the db, and note the name of the type. I have maybe a half dozen different types and don't anticipate too many additional ones in the future. After all this is said and done, I have an admin interface that looks through these logs and lets the user view the contents of the data that's been logged. The admin app has a reference to all the types involved, and with some basic switch-case logic hinged upon the name of the type, it casts the data back into its original type and passes it on to some handlers that have basic display logic to spit the data back out in a readable format (one display handler for each type). NOW... all this is well and good... until one day my model changes. The Widget class has now deprecated the name attribute and added a bunch of other attributes. I will of course get type mismatches on the admin side when I try to reconstitute this data. I was wondering if there was some way, at runtime, I could perhaps reflect through my code and get a snapshot of the type definition at that precise moment, serialize it, and store it along with the data so that I could somehow use this to reconstitute it in the future?
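    As a rough illustration of the "snapshot the type definition via reflection" idea described above: this is my own C# sketch (even though the Widget sample is VB.NET), and the snapshot string format is made up, not a drop-in answer.

        // Sketch: capture the shape of a type (its public field/property names and types)
        // as a simple string so it can be stored next to the serialized instance.
        using System;
        using System.Linq;
        using System.Reflection;

        static class TypeSnapshot
        {
            public static string Capture(Type type)
            {
                var fields = type.GetFields(BindingFlags.Public | BindingFlags.Instance)
                                 .Select(f => f.Name + ":" + f.FieldType.FullName);
                var props = type.GetProperties(BindingFlags.Public | BindingFlags.Instance)
                                .Select(p => p.Name + ":" + p.PropertyType.FullName);
                // e.g. "Widget|name:System.String"
                return type.FullName + "|" + string.Join(",", fields.Concat(props));
            }
        }

        // The snapshot string would be logged alongside the serialized instance, so the admin
        // app can compare it against the current type definition before trying to deserialize.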

    Read the article

  • Is it possible to share a C struct in shared memory between apps compiled with different compilers?

    - by Joseph Garvin
    I realize that in general the C and C++ standards give compiler writers a lot of latitude. But in particular they guarantee that POD types like C struct members have to be laid out in memory in the same order that they're listed in the struct's definition, and most compilers provide extensions letting you fix the alignment of members. So if you had a header that defined a struct and manually specified the alignment of its members, then compiled two apps with different compilers using the header, shouldn't one app be able to write an instance of the struct into shared memory and the other app be able to read it without errors? I am assuming, though, that the size of the types contained is consistent across the two compilers on the same architecture (it has to be the same platform already, since we're talking about shared memory). I realize that this is not always true for some types (e.g. long vs. long long in GCC and MSVC 64-bit), but nowadays there are uint16_t, uint32_t, etc. types, and float and double are specified by IEEE standards.

    Read the article

  • Does .NET support Win32 code interop?

    - by Usman
    Hello, I need to interop with Win32 code (unmanaged Win32 DLLs and EXEs) from .NET. I need to call unmanaged Win32 code (DLL exported functions) at runtime, i.e. knowing the data types in the Win32 signatures and passing data according to those types at runtime. This is 100% possible in the case of COM. You can convert unmanaged COM code to managed assemblies using tlbimp.exe and then use the reflection API to work with those managed types (types that were actually unmanaged, now converted to managed using tlbimp). But I need the same functionality in terms of Win32, i.e. in the .NET Framework. How?? I know MS provides an API for reading export tables, but I couldn't find an exact API for interop with unmanaged Win32 code. Regards
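    For reference, the usual C# pattern for calling an export that is resolved at runtime looks roughly like the sketch below. The DLL name "SomeNative.dll", the "Add" export and its signature are made-up examples; only LoadLibrary, GetProcAddress and Marshal.GetDelegateForFunctionPointer are real APIs.

        // Sketch: resolve and call a native export at runtime via LoadLibrary/GetProcAddress
        // and Marshal.GetDelegateForFunctionPointer.
        using System;
        using System.Runtime.InteropServices;

        static class NativeCaller
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            static extern IntPtr LoadLibrary(string lpFileName);

            [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
            static extern IntPtr GetProcAddress(IntPtr hModule, string procName);

            // Delegate type describing the unmanaged signature: int Add(int, int)
            [UnmanagedFunctionPointer(CallingConvention.StdCall)]
            delegate int AddDelegate(int a, int b);

            static void Main()
            {
                IntPtr lib = LoadLibrary("SomeNative.dll");   // hypothetical DLL
                IntPtr proc = GetProcAddress(lib, "Add");     // hypothetical export
                var add = (AddDelegate)Marshal.GetDelegateForFunctionPointer(proc, typeof(AddDelegate));
                Console.WriteLine(add(2, 3));                 // calls the native function
            }
        }

    The delegate declaration is what carries the knowledge of the Win32 signature; to do this fully at runtime you would still need signature information from somewhere, which is the part COM type libraries provide and plain Win32 exports do not.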

    Read the article

  • Django error: The model Tribe is already registered

    - by zjm1126
    When I run python manage.py syncdb, it shows this:

        The following content types are stale and need to be deleted:

            maps | tribe

        Any objects related to these content types by a foreign key will also
        be deleted. Are you sure you want to delete these content types?
        If you're unsure, answer 'no'.

        Type 'yes' to continue, or 'no' to cancel: no

    When I answer 'no' and then run python manage.py runserver:

        AlreadyRegistered at /
        The model Tribe is already registered

    What should I do? Thanks

    Read the article

  • Automatic type conversion in Java?

    - by davr
    Is there a way to do automatic implicit type conversion in Java? For example, say I have two types, 'FooSet' and 'BarSet', which are both representations of a Set. It is easy to convert between the types, so I have written two utility methods:

        /** Given a BarSet, returns a FooSet */
        public FooSet barTOfoo(BarSet input) { /* ... */ }

        /** Given a FooSet, returns a BarSet */
        public BarSet fooTObar(FooSet input) { /* ... */ }

    Now say there's a method like this that I want to call:

        public void doSomething(FooSet data) { /* .. */ }

    But all I have is a BarSet myBarSet... it means extra typing, like:

        doSomething(barTOfoo(myBarSet));

    Is there a way to tell the compiler that certain types can automatically be cast to other types? I know this is possible in C++ with overloading, but I can't find a way in Java. I want to just be able to type:

        doSomething(myBarSet);

    and have the compiler know to automatically call barTOfoo().

    Read the article

  • Windows 2003 DNS updates from ISC DHCP server

    - by wolfgangsz
    We have a very mixed network, with most clients being Debian Lenny and the rest Windows XP/Vista/7. The network itself is split into two segments (for technical reasons), called "corporate" and "engineering". On the "corporate" side, all clients get their IP addresses from a Windows DHCP server, and the dynamic updates into the Windows DNS work just fine. On the "engineering" side, clients get their IP addresses from a Linux machine running the standard ISC DHCP server. Although this server is configured to do dynamic DNS updates, they actually don't work. Anybody got any advice on how to fix this? Please note: dynamic updates from the clients directly into the DNS would work, but are not an option for us. So this is strictly about how to make this work from an ISC DHCP server to a Windows DNS server.

    Read the article

  • Relay thru external SMTP server on Exchange 2010

    - by MadBoy
    My client has a dynamic IP on which he hosts Exchange 2010 with the POP3 Connector running and gathering email from his current hosting. Until he gets a static IP, he wants to send email out. This will work most of the time, but some servers won't accept such email sent by Exchange (from a dynamic IP, for multiple reasons), so I would like to relay through the external SMTP server which hosts the current mailboxes. Normally that SMTP server could be set up to allow relaying through it, but this would require a static IP to be allowed on that server, so it would know which IP is allowed to relay through it. Or is there a way to set up a relay in Exchange 2010 so that it can use a dynamic IP and authenticate itself with a username/password on the hosted server?

    Read the article

< Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >