Search Results


  • Very different font sizes across browsers

    - by Yang
    Chrome/WebKit and Firefox have different rendering engines which render fonts differently, in particular with differing dimensions. This isn't too surprising, but what's surprising is the magnitude of some of the differences. I can always tweak individual elements on a page to be more similar, but that's tedious, to say the least. I've been searching for more systematic solutions, but many resources (e.g. SO answers) simply say "use a reset package." While I'm sure this fixes a bunch of other things like padding and spacing, it doesn't seem to make any difference for font dimensions.

    For instance, if I take the reset package from http://html5reset.org/, I can show pretty big differences (note the layout dimensions shown in the inspectors; the images were higher res than shown/resized in this answer):

        <h1 style="font-size:64px; background-color: #eee;">Article Header</h1>

    With Helvetica, Chrome has the shorter height instead:

        <h1 style="font-size:64px; background-color: #eee; font-family: Helvetica">Article Header</h1>

    Using a different font, Chrome again renders a much taller font, but additionally the letter spacing goes haywire (probably due to the boldification of the font):

        <style>
          @font-face {
            font-family: "MyriadProRegular";
            src: url("fonts/myriadpro-regular-webfont.eot");
            src: local("?"),
                 url("fonts/myriadpro-regular-webfont.woff") format("woff"),
                 url("fonts/myriadpro-regular-webfont.ttf") format("truetype"),
                 url("fonts/myriadpro-regular-webfont.svg#webfonteknRmz0m") format("svg");
            font-weight: normal;
            font-style: normal;
          }
          @font-face {
            font-family: "MyriadProLight";
            src: url("fonts/myriadpro-light-webfont.eot");
            src: local("?"),
                 url("fonts/myriadpro-light-webfont.woff") format("woff"),
                 url("fonts/myriadpro-light-webfont.ttf") format("truetype"),
                 url("fonts/myriadpro-light-webfont.svg#webfont2SBUkD9p") format("svg");
            font-weight: normal;
            font-style: normal;
          }
          @font-face {
            font-family: "MyriadProSemibold";
            src: url("fonts/myriadpro-semibold-webfont.eot");
            src: local("?"),
                 url("fonts/myriadpro-semibold-webfont.woff") format("woff"),
                 url("fonts/myriadpro-semibold-webfont.ttf") format("truetype"),
                 url("fonts/myriadpro-semibold-webfont.svg#webfontM3ufnW4Z") format("svg");
            font-weight: normal;
            font-style: normal;
          }
        </style>
        ...
        <h1 style="font-size:64px; background-color: #eee; font-family: MyriadProRegular">Article Header</h1>

    I've tried a few reset/normalize packages to no avail. I just wanted to confirm here that this is indeed a fact of life (even omitting the more glaring offenders like IE and mobile) and I'm not missing some super-awesome solution to this mess.
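    One partial mitigation (not from the question above, just a common workaround) is to pin the metrics that browsers otherwise derive differently from the font's own data. A minimal CSS sketch, assuming the heading markup from the question; it normalizes the box height, but cannot make glyph widths identical:

        h1 {
          font-size: 64px;
          line-height: 1.25;  /* an explicit line-height avoids each browser's own "normal" computation */
          margin: 0;          /* default heading margins also differ across UA stylesheets */
        }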


  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now. But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them. This made me think that writing up a few articles discussing the different features would be a good idea. Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007. Those documents are highly recommended - especially the Beginners Guide, although based on LDoms 1.0, is still a good place to begin. However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today. In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

    So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms. LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware. This virtual hardware is then used to create a number of guest systems which each behave very similarly to a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it. Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set as well as the overall CPU architecture to fulfill its function.

    The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS. For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket. To the OS, this looks like 64 CPUs. The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do. The hypervisor only assigns CPUs and then steps aside. It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests.

    Likewise, memory is assigned exclusively to individual guests. Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory. Again, the hypervisor is not involved in the actual memory access; it only maintains isolation between guests.

    During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain. Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources. If you were running the system un-virtualized, this would be what you'd be working with. To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory. This frees up most of the available CPU and memory resources for guest domains.

    IO is a little more complex, but still straightforward. When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services. In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later. For now, let's have a short look at the initial way IO was virtualized in LDoms.

    For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch. You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction. These IO services now connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains. For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest. That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device. For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers.

    The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor. It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services. These LDCs work very similarly to high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

    To see all this in action, let's now look at a first example. I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.

        root@sun # ldm list
        NAME      STATE    FLAGS   CONS   VCPU  MEMORY   UTIL  UPTIME
        primary   active   -n-c--  UART   512   261632M  0.3%  2d 13h 58m
        root@sun # ldm add-vconscon port-range=5000-5100 primary-console primary
        root@sun # svcadm enable vntsd
        root@sun # svcs vntsd
        STATE    STIME    FMRI
        online   9:53:21  svc:/ldoms/vntsd:default
        root@sun # ldm set-vcpu 16 primary
        root@sun # ldm set-mau 1 primary
        root@sun # ldm start-reconf primary
        root@sun # ldm set-memory 7680m primary
        root@sun # ldm add-config initial
        root@sun # shutdown -y -g0 -i6

    So what have I done? I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service. The vntsd will later provide console connections to guest systems, very much like serial NTS's do in the physical world. Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems. I also assigned one MAU to this domain. A MAU is a crypto unit in the T3 CPU. These need to be explicitly assigned to domains, just like CPU or memory. (This is no longer the case with T4 systems, where crypto is always available everywhere.) Before I reassigned the memory, I started what's called a "delayed reconfiguration" session. That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll spend a dedicated article on sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

    Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network.

        root@sun # ldm add-vds primary-vds primary
        root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

    You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

    This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section.

    For further reading, here are some recommended links:

    - The LDoms 2.2 Admin Guide
    - The "Beginners Guide to LDoms"
    - The LDoms Information Center on MOS
    - LDoms on OTN
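    As a preview of that next section, here is a minimal sketch of what creating and starting a guest with these services might look like. This is not from the article above; the domain name, sizes and the backend device path are illustrative only:

        root@sun # ldm add-domain guest1
        root@sun # ldm set-vcpu 8 guest1
        root@sun # ldm set-memory 8g guest1
        # export a backend (here a physical LUN) through the disk service
        root@sun # ldm add-vdsdev /dev/dsk/c3t1d0s2 guest1-disk@primary-vds
        root@sun # ldm add-vdisk vdisk0 guest1-disk@primary-vds guest1
        # give the guest one virtual NIC on the vswitch created above
        root@sun # ldm add-vnet vnet0 switch-primary guest1
        root@sun # ldm bind guest1
        root@sun # ldm start guest1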


  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language (TCL) Commands: COMMIT

    As SQL users we work with transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

    Until you commit a transaction:

    - You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
    - You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

    Note: Many people think that when you type COMMIT, the changes you have made are written to the data files, but this is wrong. When you type COMMIT you are saying that your job is complete, and the Oracle engine verifies that the transaction has achieved consistency. The commit confirmation sent to the user comes from the log buffer, not from the data buffers: the redo in the log buffer is written out first, and the data buffers are written to the data files separately, later.

    Before a transaction that modifies data is committed, the following has occurred:

    - Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
    - Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before a transaction is committed.
    - The changes have been made to the database buffers of the SGA. These changes may go to disk before a transaction is committed.

    Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, some time after the transaction commits.

    When a transaction is committed, the following occurs:

    1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
    2. The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
    3. Oracle releases locks held on rows and tables.
    4. Oracle marks the transaction complete.

    Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.

    The syntax of the COMMIT statement is:

        COMMIT [WORK] [COMMENT 'your comment'];

    - WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent. Example - committing an insert:

        INSERT INTO table_name VALUES (val1, val2);
        COMMIT WORK;

    - COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. The following statement commits the current transaction and associates a comment with it:

        COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

    - WRITE Clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use this clause to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. To commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement:

        COMMIT WRITE BATCH;

      Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

      WAIT | NOWAIT - use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior.

      IMMEDIATE | BATCH - use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior.

    - FORCE Clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.

      In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN.

      The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view the V$CORRUPT_XID_LIST and to specify this clause.

      Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

      Example - the following statement manually commits a hypothetical in-doubt distributed transaction:

        COMMIT FORCE '22.57.53';
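    To make the "erases all savepoints" behavior described above concrete, here is a small sketch of mine (the table names are hypothetical):

        INSERT INTO emp_copy SELECT * FROM employees;
        SAVEPOINT after_insert;
        UPDATE emp_copy SET salary = salary * 1.1;
        ROLLBACK TO after_insert;  -- allowed: the transaction is still open
        COMMIT;                    -- makes the insert permanent, releases locks,
                                   -- and discards the savepoint after_insert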


  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory resident data. In-Memory OLTP helps us create memory-optimized tables which in turn offer significant performance improvement for our typical OLTP workload. The main objective of memory-optimized tables is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that it still supports the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking, using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:

    - Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables refer to the normal tables we have created in SQL Server since its inception. These tables use fixed size 8 KB pages that need to be read from and written to disk as a unit.
    - Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine, which converts them into machine code that can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't be used to reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    - Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data],
                    FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir],  FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log],
                FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
            ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
            ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
            TO FILEGROUP hekaton_mod;
        GO

    Creating Tables

    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few essential new extensions. Essentially any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE query example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)

    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY=SCHEMA_ONLY will not persist the data to disk, which means the data durability is compromised, whereas DURABILITY=SCHEMA_AND_DATA ensures that data is also persisted along with the schema.

    Indexing Memory-Optimized Tables

    A memory-optimized table must always have an index for all tables created with DURABILITY=SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure it is treated as a memory-optimized table and not just a normal table, and also used the clause DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice this table has a PRIMARY KEY declared upfront, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage in memory-optimized tables - so stay tuned for that as well.

    Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore more using examples to understand the performance gains of memory-optimized tables. I will be using the database I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO
        -- Creating a disk based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO
        -- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO
        -- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO
        -- Now insert 100000 records in dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Do the same inserts for memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO
        -- Now finally do the same inserts for memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The above 3 inserts took 1.20 minutes, 54 secs, and 2 secs respectively to insert 100000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional business tables, and memory-optimized tables with durability SCHEMA_ONLY are even faster, as they do not bother persisting data to disk, which makes them supremely fast. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now, I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
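    The article mentions natively compiled stored procedures without showing one. As a rough sketch (mine, not from the original article), a natively compiled procedure against dbo.Memorytable_durable above could look like this - note the mandatory ATOMIC block, SCHEMABINDING and EXECUTE AS clauses:

        CREATE PROCEDURE dbo.usp_InsertMemRow
            @Id INT, @Name CHAR(40)
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS
        BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            -- only memory-optimized tables may be referenced here
            INSERT INTO dbo.Memorytable_durable (Id, Name) VALUES (@Id, @Name);
        END
        GO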


  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi-database environments is to mask some columns consistently, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the enormous amount of time otherwise spent identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have basic knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation.

    Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs:

    1. Substitute
    2. SQL Expression
    3. Encrypt
    4. User Defined Function

    SUBSTITUTE

    The Substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider that DEPARTMENT_ID in the EMPLOYEES table is replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of a DEPARTMENT_ID, say '101', will be replaced with '502', provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screenshot shows the usage of the Substitute masking format within a masking definition. Note that the uniqueness of the masked values depends on the number of unique values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values, without which only consistency is maintained but not uniqueness.

    SQL EXPRESSION

    SQL Expression replaces an existing value with the output of a specified SQL expression. For example, while masking an EMPLOYEES table, suppose the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table; then the corresponding SQL Expression will look like %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, then the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub-expressions, which means that you can write a nested SQL statement and use it as a SQL Expression to address complex masking business use cases.

    A SQL Expression can also be used to consistently replace a value with a hashed value using Oracle's PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number: ORA_HASH(%DEPARTMENT_ID%, 1000). The following screenshot shows the usage of the SQL Expression masking format within the masking definition.

    ORA_HASH takes three arguments:

    1. Expression, which can be of any data type except LONG, LOB, or User Defined Type [nested table type is allowed]. In the example above I used the original value as the expression.
    2. Number of hash buckets, which can be a number between 0 and 4294967295. The default value is 4294967295. You can also correlate the number of hash buckets to a range of numbers. In the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000.
    3. Seed, which can be any number and decides the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the SQL Expression above a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like: ORA_HASH(%DEPARTMENT_ID%, 1000, 1234).

    The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values.

    ENCRYPT

    The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and regular expressions to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screenshot shows the usage of the Encrypt masking format within the masking definition.

    Regular expressions may look complex to first-time users, but you will soon realize that it's a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing regular expressions; of all of them, the following My Oracle Support document helped me get started with regular expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1).

    USER DEFINED FUNCTION [UDF]

    A User Defined Function, or UDF, provides the flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is:

        FUNCTION udf_func (rowid VARCHAR2, column_name VARCHAR2, original_value VARCHAR2) RETURN VARCHAR2;

    where:

    - rowid is the row identifier of the column that needs to be masked
    - column_name is the name of the column that needs to be masked
    - original_value is the column value that needs to be masked

    You can achieve deterministic masking by using Oracle's built-in hash functions like ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, or DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic unique hexadecimal values for a given string input:

        CREATE OR REPLACE FUNCTION RD_DUX (rid VARCHAR2, column_name VARCHAR2, orig_val VARCHAR2)
        RETURN VARCHAR2
        DETERMINISTIC
        PARALLEL_ENABLE
        IS
            stext varchar2(26);
            no_of_characters number(2);
        BEGIN
            no_of_characters := 6;
            stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 1, no_of_characters);
            RETURN stext;
        END;

    The uniqueness depends on the input, the length of the string, and the number of bits used by the hash algorithm. In the function above the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm that can produce 2^128-1 unique hashed values; however, this is limited by the length of the hex output kept by the function, which is 6 characters, so only 16^6 unique values will be generated. Also, do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.

    Another example is to consistently replace characters or numbers, preserving the length and special characters, as shown below:

        CREATE OR REPLACE FUNCTION RD_DUS (rid VARCHAR2, column_name VARCHAR2, orig_val VARCHAR2)
        RETURN VARCHAR2
        DETERMINISTIC
        PARALLEL_ENABLE
        IS
            stext varchar2(26);
        BEGIN
            DBMS_RANDOM.SEED(orig_val);
            stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
            stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
            stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
            stext := REPLACE(stext, '.', '0');
            RETURN stext;
        END;

    The following screenshot shows the usage of a UDF within a masking definition.

    To summarize, Oracle Data Masking and Subsetting helps you consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking!
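    As a quick sanity check (my own example, not from the post), calling the RD_DUX function defined above twice with the same input should return the same masked value - that is what "deterministic" means here:

        SELECT RD_DUX(NULL, 'EMAIL', 'jane.doe@example.com') AS masked_1,
               RD_DUX(NULL, 'EMAIL', 'jane.doe@example.com') AS masked_2
        FROM dual;
        -- masked_1 and masked_2 are identical, run after run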


  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they're back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what's now the Sun Systems Group division of Oracle.

    Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an "oldfind" program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to work around.

    In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find, and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of whether we should add oldfind, and if so what we should call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command - for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, it seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind rather than of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as "gold find" than as "g old find".

    One of the concerns we often discuss in ARC is if a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn't resist putting forth a hypothetical support call for this command:

    "Hello, Oracle Solaris Support, how may I help you?"
    "My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn't find gold."
    "Did he get the binutils package too?"
    "No he just said findutils, do we need binutils?"
    "Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package."
    "How much does Oracle charge for that package?"
    "It's free for Solaris users."
    "You mean Oracle ships packages of gold to customers for free?"
    "Yes, if you get the binutils package, it includes GNU gold."
    "New gold? Is that some sort of alchemy, turning stuff into gold?"
    "Not new gold, gold from the GNU project."
    "Oracle's taking gold from the GNU project and shipping it to me?"
    "Yes, if you get binutils, that package includes gold along with the other tools from the GNU project."
    "And GNU doesn't mind Oracle taking their gold and giving it to customers?"
    "No, GNU is a non-profit whose goal is to share their software."
    "Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?"
    "Oh, Google donated it to them."
    "Ah! So Oracle will give me the gold that GNU got from Google!"
    "Yes, if you get the package from us."
    "How do I get the package with the gold?"
    "Just run pkg install binutils and it will put it on your disk."
    "We've got multiple disks here - which one will it put it on?"
    "The one with the system image - do you know which one that is?"
    "Well the note from the admin says the system is on the first disk and the users are on the second disk."
    "Okay, so it should go on the first disk then."
    "And where will I find the gold?"
    "It will be in the /usr/bin directory."
    "In the user's bin? So that's on the second disk?"
    "No, it would be on the system disk, with the other development tools, like make, as, and what."
    "So what's on the first disk?"
    "Well if the system image is there the commands should all be there."
    "All the commands? Not just what?"
    "Right, all the commands that come with the OS, like the shell, ps, and who."
    "So who's on the first disk too?"
    "Yes. Did your admin say when he'd be back?"
    "No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me."
    "I can't imagine why."
    "Oh, is why a command too?"
    "No, _why was a Ruby programmer."
    "Ruby? Do you give those away with the gold too?"
    "Yes, but it comes in the ruby package, not binutils."
    "Oh, I'll have to have my admin get that package too! Thanks!"

    Needless to say, we decided this might not be the best idea. Since the GNU project hasn't had to release a serious bug fix in the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so it didn't seem that adding oldfind was really necessary. We passed on including it when we update to the new findutils release.

    [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can't explain why goldfind is the old GNU find, but gold is the new GNU ld.]


  • How to use jQuery datepicker as a control parameter for SqlDataSource?

    - by Matt
    I need to display a date in this format: dd/mm/yyyy. This is actually being stored in an ASP.NET textbox and being used as a control parameter for a select on the GridView. When the query is run, though, the date format should change to 'd M y' (for Oracle). It is not working. Can someone tell me what I'm doing wrong? Right now I am pushing the "new" format to an invisible label and using the label as my control param:

        $(document).ready(function() {
            //datepicker for query, shown traditionally but holding an Oracle-needed format
            $('[id$=txtBeginDate]').datepicker({ minDate: -7, altFormat: 'd M y' });
            //get alt format
            var altFormat = $('[id$=txtBeginDate]').datepicker("option", "altFormat");
            //set date to be altformat
            $('[id$=lblActualDate]').datepicker("option", "altFormat", 'd M y');
        });
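    For context, the documented datepicker mechanism for keeping a second, differently formatted copy of the picked date is the altField/altFormat pair, and altField writes via .val(), so the alternate target needs to be an input rather than a label. A sketch under those assumptions (control ids as in the question, with lblActualDate swapped for a hypothetical hidden input hdnActualDate):

        $(document).ready(function () {
            $('[id$=txtBeginDate]').datepicker({
                minDate: -7,
                dateFormat: 'dd/mm/yy',          // what the user sees and the textbox stores
                altField: '[id$=hdnActualDate]', // hidden input receives the Oracle-friendly copy
                altFormat: 'd M y'
            });
        });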


  • Improve the Quality of ePub eBooks with Sigil

    - by Matthew Guay
    Would you like to correct errors in your ePub formatted eBooks, or even split them into chapters and create a Table of Contents? Here's how you can with the free program Sigil.

    eBooks are increasingly popular with the rise of eBook readers and reading apps on mobile devices. We recently showed you how to convert a PDF eBook to ePub format, but as you may have noticed, sometimes the converted file has some glitches or odd formatting. Additionally, many of the free ePub books available online from sources like Project Gutenberg do not include a table of contents. Sigil is a free application for Windows, OS X, and Linux that lets you edit ePub files, so let's look at how you can use it to improve your eBooks.

    Note: Sigil took several moments to open files in our tests, and froze momentarily when we maximized the window. Sigil is currently pre-release software in active development, so we would expect the bugs to be worked out in future versions. As usual, only install if you're comfortable testing pre-release software.

    Getting Started

    Download Sigil (link below), making sure to select the correct version for your computer. Run the installer, and select your preferred setup language when prompted. After a moment the installer will appear; set it up as normal. Launch Sigil when it's finished installing. It opens with a default blank ePub file, so you could actually start writing an eBook from scratch right here.

    Edit Your ePub eBooks

    Now you're ready to edit your ePub books. Click Open and browse to the file you want to edit. Now you can double-click any of the HTML or XHTML files on the left sidebar and edit them just like you would in Word. Or you can choose to view it in Code View and edit the actual HTML directly. The sidebar also gives you access to the other parts of the ePub file, such as images and CSS styles. If your ePub file has a Table of Contents, you can edit it with Sigil as well. Click Tools in the menu bar, and then select TOC Editor. Strangely there is no way to create a new table of contents, but you can remove entries from the existing one.

    Convert TXT Files to ePub

    Many free eBooks online, especially older, out-of-copyright titles, are available in plain text format. One problem with these files is that they usually use hard returns at the end of lines, so they don't reflow to fill your screen efficiently. Sigil can easily convert these files to the more useful ePub format. Open the text file in Sigil, and it will automatically reflow the text and convert it to ePub. As you can see in the screenshot below, the text in the eBook does not have hard line-breaks at the end of each line, and will be much more readable on mobile devices. Note that Sigil may take several moments opening the book, and may even become unresponsive while analyzing it. Now you can edit your eBook, split it into chapters, or just save it as is. Either way, make sure to select Save As to save your book in ePub format.

    Conclusion

    As mentioned before, Sigil seems to run slow at times, especially when editing large eBooks. But it's still a nice solution to edit and extend your ePub eBooks, and even convert plain text eBooks to the nicer ePub format. Now you can make your eBooks work just like you want, and read them on your favorite device! If you feel comfortable editing HTML files, check out our article on how to edit ePub eBooks with your favorite HTML editor.
    Links: Download Sigil from Google Code | Download free ePub eBooks from Project Gutenberg


  • will_paginate undefined method error - Ruby on Rails

    - by bgadoci
    I just installed the gem for will_paginate and it says that it was installed successfully. I followed all the instructions listed with the plugin and I am getting an "undefined method `paginate' for" error. Can't find much in the way of Google search and haven't been able to fix it myself (obviously). Here is the code:

    PostsController:

        def index
          @tag_counts = Tag.count(:group => :tag_name,
            :order => 'updated_at DESC', :limit => 10)
          @posts = Post.paginate :page => params[:page], :per_page => 50
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    app/models/post.rb:

        class Post < ActiveRecord::Base
          validates_presence_of :body, :title
          has_many :comments, :dependent => :destroy
          has_many :tags, :dependent => :destroy
          cattr_reader :per_page
          @@per_page = 10
        end

    app/views/posts/index.html.erb:

        <%= will_paginate @posts %>
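    One common cause of this error (my guess, not part of the question) is that the gem is installed but never loaded by the app. In Rails 2.x that is usually declared in config/environment.rb; a sketch, with the version constraint purely illustrative:

        # config/environment.rb
        Rails::Initializer.run do |config|
          config.gem 'will_paginate', :version => '~> 2.3'
        end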


  • How do I get an mp3 file's total time in Java?

    - by Tom Brito
    The answers provided in "How do I get a sound file's total time in Java?" work well for wav files, but not for mp3 files. They are (given a file):

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
        AudioFormat format = audioInputStream.getFormat();
        long frames = audioInputStream.getFrameLength();
        double durationInSeconds = (frames + 0.0) / format.getFrameRate();

    and:

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
        AudioFormat format = audioInputStream.getFormat();
        long audioFileLength = file.length();
        int frameSize = format.getFrameSize();
        float frameRate = format.getFrameRate();
        float durationInSeconds = (audioFileLength / (frameSize * frameRate));

    They give the same correct result for wav files, but wrong and different results for mp3 files. Any idea what I have to do to get the mp3 file's duration?
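    A possible direction (an assumption on my part, not from the question): Java Sound has no built-in MP3 decoder, so the frame length and frame rate it reports for mp3 streams are not meaningful, and a third-party parser is the usual route. A sketch assuming the mp3agic library is on the classpath:

        // Sketch assuming the mp3agic library (com.mpatric.mp3agic),
        // which reads the MP3 frame headers directly.
        import com.mpatric.mp3agic.Mp3File;

        public class Mp3Duration {
            public static void main(String[] args) throws Exception {
                Mp3File mp3 = new Mp3File("song.mp3");
                System.out.println("Duration: " + mp3.getLengthInSeconds() + " s");
            }
        }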


  • How to list TODO: in Ant build output

    - by C. Ross
    Related: How to use ant to check for tags (TODO: etc) in java source

    How can I get Ant to list the TODO: tags found in my code in the build output when I run it? I would like build failure to be optional (i.e. a setting) if they are found. I've tried Checkstyle as suggested in the related post, but it doesn't display the text of the TODO:, e.g.:

        [checkstyle] .../src/Game.java:36: warning: Comment matches to-do format 'TODO:'.
        [checkstyle] .../src/Game.java:41: warning: Comment matches to-do format 'TODO:'.
        [checkstyle] .../src/GameThread.java:25: warning: Comment matches to-do format 'TODO:'.
        [checkstyle] .../src/GameThread.java:30: warning: Comment matches to-do format 'TODO:'.
        [checkstyle] .../src/GameThread.java:44: warning: Comment matches to-do format 'TODO:'.
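    One way to print the full comment text with core Ant tasks only (a sketch of mine, not from the question; the paths are illustrative) is a concat over the sources, filtered down to the matching lines:

        <target name="list-todos">
          <!-- prints every source line containing "TODO:" to the build output -->
          <concat>
            <fileset dir="src" includes="**/*.java"/>
            <filterchain>
              <linecontains>
                <contains value="TODO:"/>
              </linecontains>
            </filterchain>
          </concat>
        </target>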


  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS); an article defining RAS can be found here.

    Oracle VM Server for SPARC Principles for Availability

    Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC:

    - Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs.
    - Configure for availability at a level of resource and effort consistent with business needs. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability. Even within the category of production there may be different levels of criticality, outage tolerances, recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything".
    - Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency.
    - It's (often) the same architecture whether virtual or not: for example, providing resiliency against a lost device path or failing disk media is done for the same reasons and may use the same design whether in a domain or not.
    - It's (often) the same technique whether using domains or not: many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss.
    - Sometimes it is different using domains: there are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that itself is a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments.
    - Some things are done via logical domains commands, and some are done in the guest: for example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multipathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this.
    - Live migration is not "high availability" in the sense of "continuous availability": if the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used.) However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving serviceability - you can move running domains off of a box before planned service or maintenance. The blog "Best Practices - Live Migration on Oracle VM Server for SPARC" discusses this.

    Topics

    Here are some of the topics that will be covered:

    - Network availability using IP Multipathing and aggregates
    - Disk path availability using virtual disks defined with multipath groups ("mpgroup")
    - Disk media resiliency, configuring for redundant disks that can tolerate media loss
    - Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains.

    Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve).

    Resources

    Here are some important links; we'll be drawing on their content in the next several articles:

    - Oracle VM Server for SPARC Documentation
    - "Maximizing Application Reliability and Availability with SPARC T5 Servers" whitepaper by Gary Combs
    - "Maximizing Application Reliability and Availability with the SPARC M5-32 Server" whitepaper by Gary Combs

    Summary

    Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.
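    As a taste of the disk path availability topic listed above, here is a minimal sketch of an mpgroup configuration (my illustration, not from the post; service names and device paths are placeholders). Both service domains export the same backend under one multipath group, and the guest gets a single virtual disk that survives the loss of either path:

        root@sun # ldm add-vdsdev mpgroup=data1 /dev/dsk/c5t0d1s2 data1@primary-vds
        root@sun # ldm add-vdsdev mpgroup=data1 /dev/dsk/c6t0d1s2 data1@secondary-vds
        root@sun # ldm add-vdisk data1 data1@primary-vds guest1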

    Read the article

  • Assistance using respond_to to find the right actions to render PDF in ruby on rails

    - by Angela
    Hi, I am trying out Prince with the Princely plugin, which is supposed to render templates that have a .pdf format through the PDF generator. Here is my controller:

        class TodoController < ApplicationController
          def show_date
            @date = Date.today
            @campaigns = Campaign.all
            @contacts = Contact.all
            @contacts.each do |contact|
            end
            respond_to do |format|
              format.html
              format.pdf do
                render :pdf => "filename",
                       :stylesheets => ["application", "prince"],
                       :layout => "pdf"
              end
            end
          end
        end

    I changed routes.rb to include the following:

        map.connect ':controller/:action.:format'
        map.todo "todo/today", :controller => "todo", :action => "show_date"

    My expected behavior is that when I request todo/today.pdf, it executes show_date and renders according to the Princely plugin. Right now, it says it cannot find the action. What do I need to do to fix this?
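
    One likely culprit, offered as a guess: the named route above does not capture a :format segment, so todo/today.pdf never matches it. A sketch (assuming Rails 2.x map-style routing, as in the question) of a named route that accepts a format:

        # routes.rb - give the named route an explicit :format segment so
        # /todo/today.pdf matches and sets params[:format] to "pdf"
        map.todo "todo/today.:format", :controller => "todo", :action => "show_date"

        # a plain /todo/today can be kept as well, defaulting to HTML
        map.connect "todo/today", :controller => "todo", :action => "show_date"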

    Read the article

  • Update User Info with restful_authentication plugin in Rails?

    - by benoror
    Hi people, I want to give users the ability to change their account info with the restful_authentication plugin in Rails. I added these two methods to my users controller:

        def edit
          @user = User.find(params[:id])
        end

        def update
          @user = User.find(params[:id])
          # Only update password when necessary
          params[:user].delete(:password) if params[:user][:password].blank?
          respond_to do |format|
            if @user.update_attributes(params[:user])
              flash[:notice] = 'User was successfully updated.'
              format.html { redirect_to(@user) }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @user.errors, :status => :unprocessable_entity }
            end
          end
        end

    Also, I copied new.html.erb to edit.html.erb. Considering that the resources are already defined in routes.rb, I was expecting it to work easily, but somehow when I click the save button it calls the create method instead of update, using a POST HTTP request. Immediately after that it automatically logs out of the session. Any ideas?
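
    A plausible cause, offered as a guess: restful_authentication's new.html.erb typically declares its form with an explicit :url (or form_tag) pointing at the collection, so the copied edit template still POSTs to create. A sketch of an edit form bound to the record, which makes Rails 2.x emit the hidden _method=put field so the request routes to update (the field names are illustrative):

        <%# edit.html.erb - binding form_for to an already-saved record makes
            it submit to update via a simulated PUT instead of POSTing to create %>
        <% form_for @user do |f| %>
          <%= f.label :email %>
          <%= f.text_field :email %>
          <%= f.label :password %>
          <%= f.password_field :password %>
          <%= f.submit "Save" %>
        <% end %>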

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results.

    I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.

    Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion.

    Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
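
    For readers who haven't used it, here is a rough sketch of the kind of setup the post is describing. The file names and sizes are made up, and the flags should be checked against the SQLIO documentation before use:

        rem param.txt - one test file per line: path, threads, mask, size in MB.
        rem By default SQLIO fills this file with 0x0 (NULL) bytes; pointing it
        rem at a copy of a real database file avoids the too-compressible-data
        rem problem described above.
        c:\testing\copy_of_real_db.dat 2 0x0 20480

        rem A write-heavy run: -kW writes, -s120 seconds, -frandom access,
        rem -o8 outstanding I/Os, -b8 8KB blocks, -LS capture latency stats.
        sqlio -kW -s120 -frandom -o8 -b8 -LS -Fparam.txt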

    Read the article

  • Ruby on Rails - f.error_messages not showing up

    - by Brian Roisentul
    Hi, I've read many posts about this issue but I never got this to work. My model looks like this:

        class Announcement < ActiveRecord::Base
          validates_presence_of :title, :description
        end

    My controller's create method (only its relevant part) looks like this:

        def create
          respond_to do |format|
            if @announcement.save
              flash[:notice] = 'Announcement was successfully created.'
              format.html { redirect_to(@announcement) }
              format.xml  { render :xml => @announcement, :status => :created, :location => @announcement }
            else
              @announcement = Announcement.new
              @provinces = Province.all
              @types = AnnouncementType.all
              @categories = Tag.find_by_sql 'select * from tags where parent_id=0 order by name asc'
              @subcategories = ''
              format.html { render :action => "new" } #new_announcement_path
              format.xml  { render :xml => @announcement.errors, :status => :unprocessable_entity }
            end
          end
        end

    My form looks like this:

        <% form_for(@announcement) do |f| %>
          <%= error_messages_for 'announcement' %> <!-- I've also tried f.error_messages -->
          ...

    What am I doing wrong?
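
    Reading the controller above, one guess is that the else branch replaces the invalid record with a fresh Announcement.new, discarding its validation errors before the "new" template re-renders. A sketch of that branch with the offending line removed:

        # else branch, keeping the @announcement that failed to save so its
        # errors collection is still populated when "new" re-renders
        else
          @provinces = Province.all
          @types = AnnouncementType.all
          @categories = Tag.find_by_sql 'select * from tags where parent_id=0 order by name asc'
          @subcategories = ''
          format.html { render :action => "new" }
          format.xml  { render :xml => @announcement.errors, :status => :unprocessable_entity }
        end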

    Read the article

  • How to add another OS entry in Wubi grub

    - by Amey Jah
    I am trying to install another Linux distro besides Ubuntu. However, I want to retain my existing Windows-based loader. Currently, as per my knowledge, MS-DOS loads GRUB, which then loads Ubuntu (with the loopback trick). Now, I have a new Linux distro with its /boot on /dev/sda8 and its root filesystem on /dev/sda9. I tried the following steps:

    1. Add an entry to 40_custom of the Ubuntu GRUB.
    2. Update GRUB.

    But upon booting via that entry, it is not able to load the new OS and shows me a blank screen. What could be the problem?

    Additional data - grub.cfg file of Ubuntu:

        menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
            gfxmode $linux_gfx_mode
            insmod gzio
            insmod ntfs
            set root='hd0,msdos5'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
            else
                search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
            fi
            loopback loop0 /ubuntu/disks/root.disk
            set root=(loop0)
            linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro quiet splash $vt_handoff
            initrd /boot/initrd.img-3.5.0-19-generic
        }
        submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
            menuentry 'Ubuntu, with Linux 3.5.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-advanced-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
                gfxmode $linux_gfx_mode
                insmod gzio
                insmod ntfs
                set root='hd0,msdos5'
                if [ x$feature_platform_search_hint = xy ]; then
                    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
                else
                    search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
                fi
                loopback loop0 /ubuntu/disks/root.disk
                set root=(loop0)
                echo 'Loading Linux 3.5.0-19-generic ...'
                linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro quiet splash $vt_handoff
                echo 'Loading initial ramdisk ...'
                initrd /boot/initrd.img-3.5.0-19-generic
            }
            menuentry 'Ubuntu, with Linux 3.5.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.5.0-19-generic-recovery-fc296be2-8c59-4f21-a3f8-47c38cd0d537' {
                insmod gzio
                insmod ntfs
                set root='hd0,msdos5'
                if [ x$feature_platform_search_hint = xy ]; then
                    search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos5 --hint-efi=hd0,msdos5 --hint-baremetal=ahci0,msdos5 01CD7BB998DB0870
                else
                    search --no-floppy --fs-uuid --set=root 01CD7BB998DB0870
                fi
                loopback loop0 /ubuntu/disks/root.disk
                set root=(loop0)
                echo 'Loading Linux 3.5.0-19-generic ...'
                linux /boot/vmlinuz-3.5.0-19-generic root=UUID=01CD7BB998DB0870 loop=/ubuntu/disks/root.disk ro recovery nomodeset
                echo 'Loading initial ramdisk ...'
                initrd /boot/initrd.img-3.5.0-19-generic
            }
        }
        ### END /etc/grub.d/10_lupin ###
        menuentry 'Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-0f490b6c-e92d-42f0-88e1-0bd3c0d27641' {
            load_video
            set gfxpayload=keep
            insmod gzio
            insmod part_msdos
            insmod ext2
            set root='hd0,msdos8'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos8 --hint-efi=hd0,msdos8 --hint-baremetal=ahci0,msdos8 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            else
                search --no-floppy --fs-uuid --set=root 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            fi
            echo 'Loading Linux core repo kernel ...'
            linux /boot/vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initramfs-linux.img
        }
        menuentry 'Linux, with Linux core repo kernel (Fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-fallback-0f490b6c-e92d-42f0-88e1-0bd3c0d27641' {
            load_video
            set gfxpayload=keep
            insmod gzio
            insmod part_msdos
            insmod ext2
            set root='hd0,msdos8'
            if [ x$feature_platform_search_hint = xy ]; then
                search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos8 --hint-efi=hd0,msdos8 --hint-baremetal=ahci0,msdos8 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            else
                search --no-floppy --fs-uuid --set=root 0f490b6c-e92d-42f0-88e1-0bd3c0d27641
            fi
            echo 'Loading Linux core repo kernel ...'
            linux /boot/vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initramfs-linux-fallback.img
        }

    lsblk:

        NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda       8:0    0 931.5G  0 disk
        +-sda1    8:1    0  39.2M  0 part
        +-sda2    8:2    0  19.8G  0 part
        +-sda3    8:3    0 205.1G  0 part
        +-sda4    8:4    0     1K  0 part
        +-sda5    8:5    0 333.7G  0 part /host
        +-sda6    8:6    0 233.4G  0 part
        +-sda7    8:7    0 100.4G  0 part
        +-sda8    8:8    0   100M  0 part
        +-sda9    8:9    0  14.7G  0 part
        +-sda10   8:10   0  21.4G  0 part
        +-sda11   8:11   0     3G  0 part
        sr0      11:0    1  1024M  0 rom
        loop0     7:0    0    29G  0 loop /

    blkid:

        /dev/loop0: UUID="fc296be2-8c59-4f21-a3f8-47c38cd0d537" TYPE="ext4"
        /dev/sda1: SEC_TYPE="msdos" LABEL="DellUtility" UUID="5450-4444" TYPE="vfat"
        /dev/sda2: LABEL="RECOVERY" UUID="78C4FAC1C4FA80A4" TYPE="ntfs"
        /dev/sda3: LABEL="OS" UUID="DACEFCF1CEFCC6B3" TYPE="ntfs"
        /dev/sda5: UUID="01CD7BB998DB0870" TYPE="ntfs"
        /dev/sda6: UUID="01CD7BB99CA3F750" TYPE="ntfs"
        /dev/sda7: LABEL="Windows 8" UUID="01CDBFB52F925F40" TYPE="ntfs"
        /dev/sda8: UUID="cdbb5770-d29c-401d-850d-ee30a048ca5e" TYPE="ext2"
        /dev/sda9: UUID="0f490b6c-e92d-42f0-88e1-0bd3c0d27641" TYPE="ext2"
        /dev/sda10: UUID="2e7682e5-8917-4edc-9bf9-044fea2ad738" TYPE="ext2"
        /dev/sda11: UUID="6081da70-d622-42b9-b489-309f922b284e" TYPE="swap"

    Any help is appreciated. Please let me know if you need any extra data.
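
    One observation, offered as a guess: the custom entries set root to hd0,msdos8 - the separate /boot partition (/dev/sda8) - yet load /boot/vmlinuz-linux. When a partition is mounted at /boot, the kernel sits at the top of that filesystem, not under a /boot directory inside it. A sketch of a 40_custom entry with adjusted paths (the UUIDs are taken from the blkid output above; the title is arbitrary):

        # /etc/grub.d/40_custom - paths are relative to the /boot partition
        # (sda8) itself, so the /boot/ prefix is dropped; root= still points
        # at the real root filesystem on sda9
        menuentry 'New Linux distro' {
            insmod part_msdos
            insmod ext2
            set root='hd0,msdos8'
            search --no-floppy --fs-uuid --set=root cdbb5770-d29c-401d-850d-ee30a048ca5e
            linux /vmlinuz-linux root=UUID=0f490b6c-e92d-42f0-88e1-0bd3c0d27641 ro quiet
            initrd /initramfs-linux.img
        }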

    Read the article

  • Python Importing object that originates in one module from a different module into a third module

    - by adewinter
    I was reading the source code for a Python project and came across the following line:

        from couchexport.export import Format

    (source: https://github.com/wbnigeria/couchexport/blob/master/couchexport/views.py#L1)

    I went over to couchexport/export.py to see what Format was (a class? a dict? something else?). Unfortunately, Format isn't defined in that file. export.py does, however, import Format from couchexport.models, where there is a Format class (source: https://github.com/wbnigeria/couchexport/blob/master/couchexport/models.py#L11). When I open up the original file in my IDE and have it look up the declaration on the line I mentioned at the start of this question, it leads directly to models.py. What's going on? How can an import from one file (export.py) actually be an import from another file (models.py) without being explicitly stated?
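
    The mechanism at work is that importing a name binds it as an attribute of the importing module, so other code can import it from there - a re-export. A minimal sketch with hypothetical flat file names:

        # models.py - where the class is actually defined
        class Format(object):
            pass

        # export.py - importing Format makes it an attribute of this module,
        # even though nothing here defines it
        from models import Format

        # views.py - this works because export.Format now exists; the IDE
        # simply resolves the name back to its true home in models.py
        from export import Format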

    Read the article

  • DecimalFormat and Double.valueOf()

    - by folone
    Hello. I'm trying to get rid of unnecessary symbols after the decimal separator of my double value. I'm doing it this way:

        DecimalFormat format = new DecimalFormat("#.#####");
        value = Double.valueOf(format.format(41251.50000000012343));

    But when I run this code, it throws:

        java.lang.NumberFormatException: For input string: "41251,5"
            at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1224)
            at java.lang.Double.valueOf(Double.java:447)
            at ...

    As I see, Double.valueOf() works great with strings like "11.1", but it chokes on strings like "11,1". How do I work around this? Is there a more elegant way than something like

        Double.valueOf(format.format(41251.50000000012343).replaceAll(",", "."));

    Is there a way to override the default decimal separator value of the DecimalFormat class?
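
    To the last question in the post: DecimalFormat picks its separator from the default locale, and that choice can be overridden through DecimalFormatSymbols. A sketch:

        import java.text.DecimalFormat;
        import java.text.DecimalFormatSymbols;
        import java.util.Locale;

        public class SeparatorDemo {
            public static void main(String[] args) {
                // Force '.' as the decimal separator regardless of the default
                // locale, so the formatted string is parseable by Double.valueOf()
                DecimalFormatSymbols symbols = new DecimalFormatSymbols(Locale.US);
                DecimalFormat format = new DecimalFormat("#.#####", symbols);
                double value = Double.valueOf(format.format(41251.50000000012343));
                System.out.println(value); // prints 41251.5
            }
        }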

    Read the article

  • Formatting my String

    - by pringlesinn
    I need to write currency values like $35.40 (thirty five dollars and forty cents) and, after that, I want to pad with "*" characters, so at the end it will be:

        thirty five dollars and forty cents*********

    in a maximum of 100 characters. I've asked a question about something very similar, but I couldn't understand the main command:

        String format = String.format("%%-%ds", 100);
        String valorPorExtenso = String.format(format, new Extenso(tituloTO.getValor()).toString());

    What do I need to change in format to put "*" at the end of my sentence? The way it is now, it puts spaces.
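
    For what it's worth, String.format's width specifier can only pad with spaces (or zeros for numbers), so one sketch of a workaround - with the Extenso value replaced by a plain string for illustration - appends the fill characters manually:

        public class PadDemo {
            public static void main(String[] args) {
                String words = "thirty five dollars and forty cents";
                // String.format("%-100s", words) pads with spaces only, and a
                // blanket replace of ' ' with '*' would also clobber the spaces
                // between words, so append the fill characters directly instead
                StringBuilder sb = new StringBuilder(words);
                while (sb.length() < 100) {
                    sb.append('*');
                }
                System.out.println(sb.toString());
            }
        }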

    Read the article

  • Undefined method 'total_entries' after upgrading Rails 2.2.2 to 2.3.5

    - by Trevor
    I am upgrading a Rails application from 2.2.2 to 2.3.5. The only remaining error is when I invoke total_entries for creating a jqGrid.

    Error:

        NoMethodError (undefined method `total_entries' for #<Array:0xbbe9ab0>)

    Code snippet:

        @route = Route.find( :all, :conditions => "id in (#{params[:id]})" ) {
          if params[:page].present? then
            paginate :page => params[:page], :per_page => params[:rows]
            order_by "#{params[:sidx]} #{params[:sord]}"
          end
        }

        respond_to do |format|
          format.html # show.html.erb
          format.xml   { render :xml => @route }
          format.json  { render :json => @route }
          format.jgrid { render :json => @route.to_jqgrid_json( [ :id, :name ], params[:page], params[:rows], @route.total_entries ) }
        end

    Any ideas? Thanks!
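
    A hedged guess at the cause: total_entries is defined on will_paginate's collection class, while a plain find(:all) returns an Array, so if the block-style pagination DSL stopped taking effect under 2.3.5, the result reverts to a bare Array. Assuming will_paginate is the pagination plugin in use, a sketch that calls it directly:

        # Model.paginate returns a WillPaginate::Collection,
        # which responds to total_entries
        @route = Route.paginate(
          :conditions => "id in (#{params[:id]})",
          :order      => "#{params[:sidx]} #{params[:sord]}",
          :page       => params[:page],
          :per_page   => params[:rows]
        )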

    Read the article

  • How do I set up my @product=Product.find(params[:id]) to have a product_url?

    - by montooner
    Trying to recreate script/generate scaffold, and I've gotten through a number of Rails basics. I suspect that I need to configure a default product URL somewhere. But where do I do this?

    Setup:

    - Have: def edit containing @product = Product.find(params[:id])
    - Have edit.html.erb, with an edit form posting to action = :create
    - Have def create { ... }, with the code redirect_to(@product, ...)

    Getting error:

        undefined method `product_url' for #<ProductsController:0x56102b0>

    My def update:

        def update
          @product = Product.find(params[:id])
          respond_to do |format|
            if @product.update_attributes(params[:product])
              format.html { redirect_to(@product, :notice => 'Product was successfully updated.') }
              format.xml  { head :ok }
            else
              format.html { render :action => "edit" }
              format.xml  { render :xml => @product.errors, :status => :unprocessable_entity }
            end
          end
        end
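
    A hedged hint: redirect_to(@product) relies on the product_url named-route helper, which only exists once the resource is declared in the router. A sketch for Rails 2.x routing, which the scaffold generator would normally add for you:

        # config/routes.rb - declaring the resource generates products_url,
        # product_url(@product), edit_product_url(@product), and friends,
        # which makes redirect_to(@product) resolvable
        ActionController::Routing::Routes.draw do |map|
          map.resources :products
        end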

    Read the article

  • JSON is not nested in rails view

    - by SeanGeneva
    I have several models in a hierarchy, 1:many at each level. Each class is associated only with the class above it and the one below it, i.e.: L1 course, L2 unit, L3 unit layout, L4 layout fields, L5 table fields (not in the code below, but a sibling of layout fields). I am trying to build a JSON response of the entire hierarchy.

        def show
          @course = Course.find(params[:id])
          respond_to do |format|
            format.html # show.html.erb
            format.json do
              @course = Course.find(params[:id])
              @units = @course.units.all
              @unit_layouts = UnitLayout.where(:unit_id => @units)
              @layout_fields = LayoutField.where(:unit_layout_id => @unit_layouts)
              response = {:course => @course, :units => @units, :unit_layouts => @unit_layouts, :layout_fields => @layout_fields}
              respond_to do |format|
                format.json { render :json => response }
              end
            end
          end
        end

    The code is bringing back the correct values, but the units, unit_layouts and layout_fields are all nested at the same level under course. I would like them to be nested inside their parent.
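
    A sketch of one way to get the nesting, assuming the usual has_many associations (units, unit_layouts, layout_fields) are declared on the models: let to_json walk the associations with :include instead of assembling parallel top-level collections:

        def show
          @course = Course.find(params[:id])
          respond_to do |format|
            format.html
            format.json do
              # :include nests each child collection inside its parent record
              render :json => @course.to_json(
                :include => {
                  :units => {
                    :include => {
                      :unit_layouts => { :include => :layout_fields }
                    }
                  }
                }
              )
            end
          end
        end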

    Read the article

  • What is the best way to handle dynamic content_type in Sinatra

    - by lusis
    I'm currently doing the following, but it feels "kludgy":

        module Sinatra
          module DynFormat
            def dform(data, ct)
              if ct == 'xml';  return data.to_xml;  end
              if ct == 'json'; return data.to_json; end
            end
          end
          helpers DynFormat
        end

    My goal is to plan ahead. Right now we're only working with XML for this particular web service, but we want to move over to JSON as soon as all the components in our stack support it. Here's a sample route:

        get '/api/people/named/:name/:format' do
          format = params[:format]
          h = {'xml' => 'text/xml', 'json' => 'application/json'}
          content_type h[format], :charset => 'utf-8'
          person = params[:name]
          salesperson = Salespeople.find(:all, :conditions => ['name LIKE ?', "%#{person}%"])
          "#{dform(salesperson, format)}"
        end

    It just feels like I'm not doing it the best way possible.
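
    One tidier shape, offered as a sketch rather than the canonical answer: drive both the content type and the serializer from a single lookup table in the helper, so adding a format later is a one-line change:

        module Sinatra
          module DynFormat
            # Map each supported format to its MIME type and serializer in one place
            FORMATS = {
              'xml'  => ['text/xml',         :to_xml],
              'json' => ['application/json', :to_json]
            }

            def dform(data, ct)
              mime, serializer = FORMATS.fetch(ct) { halt 406, 'Unsupported format' }
              content_type mime, :charset => 'utf-8'
              data.send(serializer)
            end
          end
          helpers DynFormat
        end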

    Read the article

< Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >