Search Results

Search found 24675 results on 987 pages for 'table'.


  • Export MySQL database tables to PHP code to create the same tables in another database?

    - by chefnelone
    How do I export MySQL database tables to PHP code so that I can create and populate the same tables in another database? I have a local database that I exported to SQL syntax, which gives me something like:

        CREATE TABLE `boletinSuscritos` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(120) NOT NULL,
          `email` varchar(120) NOT NULL,
          `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;

        INSERT INTO `boletinSuscritos` VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12');
        INSERT INTO `boletinSuscritos` VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56');

    but I need it to be like this (is there any way to export the tables in this way?):

        $sql = "CREATE TABLE boletinSuscritos (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(120) NOT NULL,
          email varchar(120) NOT NULL,
          date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY ( id )
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3";

        mysql_query($sql, $conexion);
        mysql_query("INSERT INTO boletinSuscritos VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12')");
        mysql_query("INSERT INTO boletinSuscritos VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56')");
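
    One way to get most of the way there without hand-editing the dump (a sketch, not from the original thread): MySQL can emit the DDL for any table itself, so a small script can fetch each statement and wrap it in a query call against the target database. The table name below comes from the question; everything else is illustrative.

        -- Returns one row whose second column is the complete CREATE TABLE
        -- statement for the table; a script can wrap that text in a query
        -- call against the other database.
        SHOW CREATE TABLE boletinSuscritos;

        -- The row data can be fetched the same way and turned into the
        -- corresponding INSERT statements.
        SELECT id, name, email, date FROM boletinSuscritos;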

    Read the article

  • CodePlex Daily Summary for Friday, April 09, 2010

    CodePlex Daily Summary for Friday, April 09, 2010

    New Projects

    - (SocketCoder) Free Silverlight Voice/Video Conferencing Modules: The goal of this project is to provide complete Open Source Voice/Video Chatting Client/Server Modules using Silverlight techniques, this project i...
    - AJAX Control Framework: Do PageMethods and the UpdatePanel make you feel dirty? Think making AJAX enabled custom ASP.NET controls should be WAY easier than it is? Wish ASP.NE...
    - Bluetooth Radar: WPF 4.0 application working with the final release of 32feet.net (v2.2) to discover Bluetooth devices, send files and more cool stuff for Bluetooth...
    - Bomberman: Bomberman c++ Project
    - Code Library: This is just a personal storage place for a utility library containing extension methods, new classes, and/or improvements to existing classes.
    - DianPing.com MogileFS Client: MogileFS Client for .Net 2.0
    - Dirty City Hearts Website: Dirty City Hearts Website
    - DocGen - SharePoint 2010 Bulk Document Loader: DocGen is a SharePoint 2010 multithreaded console application for bulk loading sample documents into SharePoint. This program generates Microsoft ...
    - dou24: WebSite for DOU
    - Explora: Explora is a file browser that does not try to be a substitute for Windows Explorer, but rather a coding experiment to share...
    - HobbyBrew Mobile: This project is basic beer brewing software for Windows Mobile able to read HobbyBrew xml files. Developed in C# and Windows Forms
    - jLight: Interop between Silverlight and the javascript based on jQuery. The syntax used in Silverlight is as close as possible to the jQuery syntax.
    - johandekoning.nl samples: Sample code projects which are discussed on johandekoning.nl / johandekoning.com. Most examples are / will be developed with C#
    - Kanban: this is an agile project management
    - METAR.NET Decoder: Project libraries used to decode airport METAR weather information into adequate data types, change them and back, create resulting METAR informati...
    - Micro Framework: MFDeploy with Set/Get mote SKU ID: This is a modification to the Micro Framework's MFDeploy utility that lets the user set and get the mote's ID (aka SKU). It can be done via the GUI...
    - MobySharp: MobySharp is an implementation of the Mobypicture.com API written in C#
    - NGilead: NGilead permits you to use your NHibernate POCO (and especially the partially loaded ones) outside the .NET Virtual Machine (to Silverlight, for exa...
    - OpenIdPortableArea: OpenIdPortableArea is an MvcContrib powered Portable Area that encapsulates logic for implementing OpenId encapsulation (using DotNetOpenAuth).
    - OrderToList Extension for IEnumerable: An extension method for IEnumerable<T> that will sort the IEnumerable based on a list of keys. Suppose you have a list of IDs {10, 5, 12} and wa...
    - project3140.org: Code repository for project3140.org.
    - Prometheus Backup Solution: The Prometheus Backup Solution is a free and small Backup Utility for personal use and for small businesses.
    - Roids: an asteroids clone for Silverlight and XNA: An example of a simple game cross-compiling for both Silverlight and XNA using SilverSprite.
    - SemanticAnalyzer: 3rd phase of Compiler Design Project
    - SSRS SDK for PHP: SQL Server Reporting Service SDK for PHP
    - Working Memory Workout: Working Memory Workout is a working memory training game based on the N-back, a task researchers say may improve fluid intelligence. It greatly ex...
    - Wouters Code Samples: This Project will host some of my sample projects I created. I'm a professional SharePoint/BizTalk developer so most of the provided samples will ...

    New Releases

    - (SocketCoder) Free Silverlight Voice/Video Conferencing Modules: Silverlight Voice Video Chat Modules: Client/Server Silverlight Voice Video Chat Modules
    - AccessibilityChecker: Accessibility Checker V0.2: Direct url's input functionality added - XHTML, WAI validation modules, easy to extend. (W3C and Achecker modules incl...
    - AStar.net: AStar.net 1.1 downloads: AStar.net 1.1 version details. Greatly improved path finding speed and memory usage from version 1.0. Available downloads: AStar.net 1.1 dll - Runtim...
    - AutoPoco: AutoPoco 0.2: This release will bring some non-generic alternatives to configuration + some more automatic configuration options such as assembly scanning
    - Bluetooth Radar: Version 1: Basic version only with the ability to discover Bluetooth devices around you.
    - Convert-Media PowerShell Module for Expression Encoder: Release 1.0.0.2: This is a build that incorporates the latest change sets including perform publish. No other changes
    - DevTreks - social budgeting that improves lives and livelihoods: Social Budgeting Web Software, DevTreks alpha 3e: Alpha 3e is a general debug. It also upgrades the software's family budgeting capabilities, including the addition of a new 'Food Nutrition Input'...
    - dV2t Enterprise Library: dV2tEntLib 1.0.0.3
    - EnhSim: Release v1.9.8.3: Change Armour Penetration calcs to apply the "Rouncer fix" (current version displays debug info to assist users in testing that th...
    - HouseFly controls: HouseFly controls alpha 0.9: HouseFly controls 0.9 alpha binaries (includes HouseFly.Classes and HouseFly.Controls).
    - Jitbit WYSWYG BBCode Editor: Release
    - Micro Framework: MFDeploy with Set/Get mote SKU ID: MFDeploy with get, set mote ID: The Micro Framework 4.0 MFDeploy, modified to let the user get & set the mote ID
    - MobySharp: MobySharp 1.0: Initial Release
    - OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll, DotNetOpenAuth.xml, MvcContrib.dll, MvcContrib.xml, OpenIdPortableArea.dll, OpenIdPortableAre...
    - OrderToList Extension for IEnumerable: Release 0.9b: I'm calling this 0.9 because I came up with it yesterday and there's little real world use, so there's probably something that needs fixing or improv...
    - Prometheus Backup Solution: Prometheus BETA: Actual BETA Release. Restore Functions are not available...
    - Reusable Library: V1.0.6: A collection of reusable abstractions for enterprise application developers.
    - Reusable Library Demo: V1.0.4: A demonstration of reusable abstractions for enterprise application developers
    - SharePoint Labs: SPLab4005A-FRA-Level100: This SharePoint Lab will teach you the 5th best practice you should apply when writing code with the SharePoint API. Lab La...
    - SharePoint Labs: SPLab6001A-FRA-Level200: This SharePoint Lab will teach you how to create a generic Feature Receiver within Visual Studio. Creating a Feature Receiv...
    - SharePoint LogViewer: SharePoint LogViewer 2.0: Supports live Farm monitoring. Many bug fixes.
    - Simple Savant: Simple Savant v0.5: Added support for custom constraint/validation logic (see Versioning and Consistency). Added support for reliable cross-domain writes (see Version...
    - SQL Server Extended Properties Quick Editor: Release 1.6.1: What's new in 1.6.1: Add an edit form to support long text editing; double click to open editor. Add an ORM extended properties initializer to creat...
    - SSRS SDK for PHP: SSRS SDK for PHP: Current release includes the SSRSReport library to connect to SQL Server Reporting Services and a sample application to show the basic steps needed...
    - Table Storage Backup & Restore for Windows Azure: Table Storage Backup 1.0.3751: Bug fix: Crash when creating a table if the existing table had not finished deleting. Bug fix: Incorrect batch URI if the storage account ended in ...
    - VCC: Latest build, v2.1.30408.0: Automatic drop of latest build
    - Visual Studio DSite: Audio Player (Visual C++ 2008): An audio player that can play wav files.
    - Working Memory Workout: Working Memory Workout 1.0: Working Memory Workout is a working memory trainer based on the N-back memory task.
    - Wouters Code Samples: XMLReceiveCBR: This is a Custom Pipeline component. It will help you create a Content Based Routing solution in combination with a WCF Request/Response service. Gene...
    - Xen: Graphics API for XNA: Xen 1.8: Version 1.8 (XNA 3.1). This update fixes a number of bugs in several areas of the API and introduces a large new Tutorial. [Added] L2 Spherical Ha...

    Most Popular Projects: WBFS Manager; Rawr; Microsoft SQL Server Product Samples: Database; ASP.NET Ajax Library; Silverlight Toolkit; AJAX Control Toolkit; Windows Presentation Foundation (WPF); ASP.NET; Microsoft SQL Server Community & Samples; Facebook Developer Toolkit

    Most Active Projects: nopCommerce. Open Source online shop e-commerce solution; Shweet: SharePoint 2010 Team Messaging built with Pex; Rawr; AutoPoco; patterns & practices – Enterprise Library; Ionics Isapi Rewrite Filter; NB_Store - Free DotNetNuke Ecommerce Catalog Module; Facebook Developer Toolkit; Farseer Physics Engine; Ncqrs Framework - The CQRS framework for .NET

    Read the article

  • Oracle University has released “Oracle AIA Foundation Pack 11g: Developing Applications” in the Training on Demand format (TOD)

    - by Lionel Dubreuil
    In this course, you will learn how to quickly develop integrations using Application Integration Architecture (AIA) Foundation Pack 11g that run on Oracle Fusion Middleware. You'll learn to:

    - Design and create Application Business Connector Services to integrate applications into AIA
    - Create Enterprise Business Services to perform specific business activities
    - Configure Guaranteed Message Delivery to ensure no loss of messages
    - Extend Enterprise Business Objects and Application Business Connector Services to meet corporate requirements

    This course is available now in Training on Demand format. Training on Demand features are:

    - Delivered by top instructors
    - Video of classroom lecture, whiteboarding, labs
    - Hands-on practice environment
    - Ask your instructor
    - Bonus material from product experts

    Why choose On Demand?

    - Start training within 24 hours
    - Get full classroom content online
    - Customize your learning experience
    - Eliminate travel-related expenses
    - Access anytime, anywhere 24/7

    You'll find more information here.

    Read the article

  • SQL SERVER – Disk Space Monitoring – Detecting Low Disk Space on Server

    - by Pinal Dave
    A very common question I often receive is how to detect if disk space is running low on SQL Server. There are two different ways to do this. I personally prefer method 2, as it is very easy to use and I can use it creatively along with the database name.

    Method 1:

        EXEC MASTER..xp_fixeddrives
        GO

    The above query returns two columns: drive name and MB free. If we want to use this data in our query, we have to create a temporary table, insert the data from this stored procedure into the temporary table, and use it from there.

    Method 2:

        SELECT DISTINCT dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    The above query gives us three columns: drive logical name, drive letter, and free space in MB. We can further modify the query to also include the database name:

        SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    This gives us additional data about which database is placed on which drive. If you see a database name multiple times, it is because your database has multiple files and they are on different drives. You can modify the query one more time to include the actual file location as well.

        SELECT DISTINCT DB_NAME(dovs.database_id) DBName,
        mf.physical_name PhysicalFileLocation,
        dovs.logical_volume_name AS LogicalName,
        dovs.volume_mount_point AS Drive,
        CONVERT(INT,dovs.available_bytes/1048576.0) AS FreeSpaceInMB
        FROM sys.master_files mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) dovs
        ORDER BY FreeSpaceInMB ASC
        GO

    The above query now additionally includes the physical file location. As I mentioned earlier, I prefer method 2 as I can use it creatively as per the business need. Let me know which method you are using in your production server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
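
    For completeness, a minimal sketch of the temporary-table step that Method 1 requires (the table and column names here are illustrative, not from the original post):

        -- Capture the output of xp_fixeddrives so it can be filtered or joined
        CREATE TABLE #FixedDrives (DriveName CHAR(1), MBFree INT);

        INSERT INTO #FixedDrives (DriveName, MBFree)
        EXEC master..xp_fixeddrives;

        -- The free-space data can now be used like any other table,
        -- e.g. to flag drives with less than 1 GB free
        SELECT DriveName, MBFree
        FROM #FixedDrives
        WHERE MBFree < 1024;

        DROP TABLE #FixedDrives;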

    Read the article

  • Oracle LogMiner (Unsupported Operation)

    - by Sarith
    Hi all, I'm currently using Oracle 11g on RHEL5. I'm digging into what is in the archived log files. After querying v$logmnr_contents, I see many transactions with an UNSUPPORTED operation. What do these unsupported transactions mean? I suspect they are what makes my database generate lots of archived logs. Moreover, I'm using global temporary tables for generating reports. I discovered that when I insert into and delete from those temporary tables, that activity is also recorded in the archived log file. What can I do to reduce those recorded transactions? Regards, Sarith
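
    As a starting point for digging further, a sketch of a query that summarizes where the UNSUPPORTED rows come from (the view and the OPERATION value are taken from the question; the grouping columns are standard LogMiner columns):

        -- Count UNSUPPORTED redo records per segment to see which objects
        -- (for example, the temporary tables) are producing them
        SELECT seg_owner, seg_name, COUNT(*) AS redo_records
        FROM v$logmnr_contents
        WHERE operation = 'UNSUPPORTED'
        GROUP BY seg_owner, seg_name
        ORDER BY redo_records DESC;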

    Read the article

  • SQL SERVER – Copy Statistics from One Server to Another Server

    - by pinaldave
    I was recently working on a performance tuning project in Dubai (yes, I was able to see the tallest tower from the window of my workplace). I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of the original database from a certain client. However, the client was not able to provide us any data due to privacy issues. The schema alone was not enough, because without access to the underlying data it was a bit difficult to judge the queries. For example, without any primary data, all the queries run in 0 (zero) milliseconds and all use nested loops, as there is no data to be returned. Even though we had CPU-offending queries, they were not doing anything without the data in the tables. This was a real challenge, as I did not have access to the production server's data and could not recreate the production scenarios without it. Well, I was confused, but Ruben from Solid Quality Mentors, Spain taught me a new trick. He suggested that when the table schema is generated, we can create the statistics along with it. Here is how we had done that: once the statistics are created along with the schema, even without data in the tables, all the queries behave as they would on the production server. This way, without access to the data, we were able to recreate the same scenario as the production server on the development server. When you look at the script, you will find that the statistics were generated along with the schema. You will find the statistics included in a WITH STATS_STREAM clause. What a simple and effective script. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Statistics, Statistics
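
    The script itself is not reproduced in this excerpt, but the shape of what SQL Server emits when statistics are scripted looks roughly like the sketch below. The table and statistic names are hypothetical, and the STATS_STREAM value (a long hexadecimal blob generated by the server) is truncated here for readability:

        -- One statement like this is generated per statistic; the binary
        -- stream encodes the histogram, so the optimizer sees production-like
        -- data distributions even though the table itself is empty.
        UPDATE STATISTICS [dbo].[Orders]([IX_Orders_CustomerID])
        WITH STATS_STREAM = 0x0100000001E0..., ROWCOUNT = 1000000, PAGECOUNT = 15000;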

    Read the article

  • After restoring a SQL Server database from another server – login fails

    - by Renso
    Issue: After you have restored a SQL Server database from another server, let's say from production to a Q/A environment, you get the "Login fails" message for your service account.

    Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logons do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logons do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database.

    Solution: Link a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user and password if the login does not already exist; otherwise you must specify user, but password will be ignored. login must be NULL. user must be a valid user in the current database. The login cannot have another user mapped to it. Execute the following stored procedure; in this example the login user name is "MyUser":

        exec sp_change_users_login 'Auto_Fix', 'MyUser'

    NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.
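
    As an aside not covered in the post above: on SQL Server 2005 SP2 and later, the documented replacement for this procedure is ALTER USER, which remaps the orphaned database user directly. A minimal sketch using the same user name as in the example:

        -- Re-link the orphaned database user to the server login of the same name
        ALTER USER MyUser WITH LOGIN = MyUser;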

    Read the article

  • The “AfterDark” Reception Is Back!

    - by Kristin Rose
    This year, the OPN Exchange “AfterDark” Reception is moving to new heights! Join us on the 5th floor of the Metreon building in San Francisco for this exclusive ‘VIP’ event. The reception will be held from 7:30 p.m. – 10 p.m. on Sunday, September 30th. Enjoy the smooth sounds of Macy Gray over a cocktail, as you network the night away and watch the 2012 live Music Festival performances from above! Best of all, this event is exclusive and free to all Oracle PartnerNetwork Exchange attendees! So come mix and mingle with us as we kick-off Oracle OpenWorld 2012 with great conversation and music! See You After Dark! The OPN Communications Team

    Read the article

  • Issue with resetting auto increment from the default to a big number

    - by Sai Srikanth
    I have a MySQL table named INVOICES for an inventory monitoring site; invoice_number is a bigint(19) AUTO_INCREMENT field. Currently the AUTO_INCREMENT value is 1, but the client wants invoice_number to start from 50000, so I reset it with the following script:

        ALTER TABLE INVOICES AUTO_INCREMENT = 50000;

    When I write an insert script in SQLDBX, it assigns invoice_number values starting from 50000. But when I insert a record through the application (a web application), the invoice_number value starts from 1. We are using the Spring JDBC template to insert data into the MySQL database.
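
    A hedged check worth running from both SQLDBX and the application's connection (the table name comes from the question; the query is standard MySQL): confirm which counter the server actually reports. If the counter is correct, that points toward the application supplying invoice_number explicitly rather than letting the server generate it.

        -- Shows the value MySQL will hand out on the next auto-generated insert
        SELECT AUTO_INCREMENT
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = DATABASE()
          AND TABLE_NAME = 'INVOICES';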

    Read the article

  • How to reinstall Mac OS X on an OS X/Linux dual-boot system?

    - by strangeronyourtrain
    My setup: I have a MacBook Pro 5,5 with a Mac OS X Snow Leopard partition and a Linux partition. I use rEFIt to boot into Linux. I didn't use Boot Camp when I originally installed Linux; instead, I manually created the partition (with either Disk Utility in OS X or Gparted on a Linux live CD--I don't recall which one) and then installed Linux on it from a live CD. The problem: My OS X partition is corrupt, and I need to reinstall Snow Leopard. Since I installed rEFIt from within OS X, I'm concerned that wiping the OS X partition will prevent me from booting into my Linux partition. How can I do this without losing access to my Linux partition? Is it possible to install Snow Leopard on the partition I reserved for it, or will it automatically overwrite the entire drive? And if I do the fresh OS X install and then install rEFIt again, will it automatically recognize my Linux partition? Thanks for any tips! Specs: MacBook Pro 5,5 (Mid-2009); Snow Leopard 10.6.7/64-bit Sabayon Linux, 2.6.36 kernel

    EDIT/UPDATE: Thanks, but the situation has taken a more complicated turn: I tried to reinstall Snow Leopard from the DVD, but it refused to install onto my Mac partition, claiming: "The disk cannot be used to start up your computer." Disk Utility wouldn't let me resize the partition or create a new one, and it doesn't see my Linux partition. It only displays the two partitions "Macintosh HD" and Linux Swap. I can, however, see all the partitions from Linux. This is the partition table as shown in Gparted: And the output of "fdisk -l" is:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      409639      204819+  ee  GPT
        /dev/sda2          409640   349590464   174590412+  af  HFS / HFS+
        /dev/sda3       483122745   488392064     2634660   82  Linux swap / Solaris
        /dev/sda4   *   349590465   483122744    66766140   83  Linux

        Partition table entries are not in disk order

    I wonder if this is because I originally partitioned my disk with Gparted instead of OS X's Disk Utility (at this point, I don't recall whether I used Gparted or Disk Utility). In any case, it doesn't seem safe to do any reformatting with Disk Utility now, as I'm afraid it will wipe sda2 ("Macintosh HD") as well as sda4 (my Linux partition). So... I'm hoping to find a solution that doesn't involve wiping my entire hard disk. Would it be safe/possible to use Gparted to erase sda2 ("Macintosh HD") and then use the Snow Leopard DVD to install OS X onto *just* sda2 without touching the other partitions? Thanks for any insight!

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one.

    Setup:

           A         switch       B           C
        |----|       |---|      |----|      |----|
        |eth0|-------|   |------|eth0|      |    |
        |----|       |---|      |eth1|------|eth1|
                                |----|      |----|

    E.g., SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234", B would say "hi on eth0, traffic for port 1234 goes on here eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:

        A - eth0 - 192.168.109.2
        B - eth0 - B1 = 192.168.109.15, B2 = 172.24.40.130
          - eth1 - 192.168.0.1
        C - eth1 - 192.168.0.2

    A, B & C are RHEL (RedHat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable.

    Update after response from Eddie: Few problems (and machine B's IP is different!) From A, ssh 172.24.40.130 works ok (can get to B2), but ssh 172.24.40.130 -p 2022 -vv times out with:

        OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022.
        ...wait ages...
        debug1: connect to address 172.24.40.130 port 2022: Connection timed out
        ssh: connect to host 172.24.40.130 port 2022: Connection timed out

    From B2:

        $ service iptables status
        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target  prot opt source      destination

        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source      destination
        1    ACCEPT  tcp  --  0.0.0.0/0   192.168.0.2   tcp dpt:22

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source      destination

        Table: nat
        Chain PREROUTING (policy ACCEPT)
        num  target  prot opt source      destination
        1    DNAT    tcp  --  0.0.0.0/0   0.0.0.0/0     tcp dpt:2022 to:192.168.0.2:22

        Chain POSTROUTING (policy ACCEPT)
        num  target  prot opt source      destination

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source      destination

    And ssh from B2 to C works fine:

        $ ssh 192.168.0.2

    Route info:

        $ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     *               255.255.255.0   U     0      0        0 eth1
        172.24.40.0     *               255.255.255.0   U     0      0        0 eth0
        169.254.0.0     *               255.255.0.0     U     0      0        0 eth1
        default         172.24.40.1     0.0.0.0         UG    0      0        0 eth0

        $ ip route
        192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1
        172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130
        169.254.0.0/16 dev eth1 scope link
        default via 172.24.40.1 dev eth0

    So I just don't know why the port forward doesn't work from A to B2?

    Read the article

  • New whitepaper, “Why Oracle Sun ZFS Storage Appliance for Oracle Databases?” now available.

    - by Cinzia Mascanzoni
    Databases are the backbone of today’s modern business, providing transaction integrity for key business systems such as payment engines, or providing the core of analytical data for decision-making. These diverse use cases require a flexible, high performance and highly available storage platform. The ZFS Storage Appliance is ideally suited, with an architecture flexible enough to meet the ever-changing availability, capacity and performance requirements of the business. In this just-published white paper the authors provide both business and technical evidence of the suitability of the Oracle ZFS Storage Appliance as primary storage for Oracle Database 11gR2 environments. Click here to download the whitepaper.

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered from customers directly and through customer advisory boards, some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure.

    The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information.

    The Repository Tab

    The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel, let’s call out a few and let you explore the others later yourself on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability:

    - Is the database in Archive Log mode?
    - Is the database using Flashback?
    - When was the last database backup taken?

    In this test environment above the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the Control file indicates there have been no recorded backups taken.

    The next region of interest on this page shows key information around the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle minimum recommended values. Should the values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values if the parameter allows dynamic changes).

    The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers have requested. First up: restarting Repository jobs. As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation. You can now take care of this directly from within the UI. Next, you’ll see that a feature has been added to allow the EM administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc. is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid clashes like this.

    Moving on to the next tab, let’s select the Metrics tab.

    The Metrics Tab

    There are some big changes here; this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, or large numbers of new hosts or new target types, into EM, and the impact this has on the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page.

    First up, there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates a relative volume. You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows. Closely following behind, though, are a large number of collections for Oracle Weblogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7Mb of data. The total information collection for Weblogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want this breakdown on the volume of data collected, simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the Metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of Number of Rows, Number of Collections and Storage cost of data - for each Metric type.

    Looking at another panel on this page, we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above, we can see the largest metric collection (by volume) over the last 30 days is the information about OS Registered Software on a Host target.

    Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate.

    The last tab we’ll look at on this page is the Schema tab.

    The Schema Tab

    Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users. Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes.

    The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see the upward trend here, showing that storage in the EM Repository has been consumed on an upward trend over the last few days. This is normal, as the EM being used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus.

    The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region.

    The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository. Of course, it has also been a long-requested feature to have the ability to modify these default retention periods, and you can do this using this screen as well. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in Groups, and retention policies are modified at the Group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages.

    If you're a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!

    Read the article

  • CodePlex Daily Summary for Wednesday, March 24, 2010

    CodePlex Daily Summary for Wednesday, March 24, 2010

    New Projects

    - C++ Sparse Matrix Package: This is a package containing sparse matrix operations like multiplication, addition, Cholesky decomposition, inversions and so on. It is a simple, ...
    - Change Password Web Part for FBA-ADAM User: This web part enables users to change ADAM (Active Directory Application Mode) password from within a SharePoint Site Collection. It is compatible ...
    - DAMAJ Investigator: The Purpose (Mission): The purpose of this project is to build a tool to help developers do rationale investigations. The tool should synthesize...
    - DotNetWinService: DotNetWinService offers a very simple framework to declaratively implement scheduled tasks inside a Windows Service.
    - internshipgameloft: <project name> makes it easier for <target user group> to <activity>. You'll no longer have to <activity>. It's developed in <programming language>.
    - JavaScript Grid: JavaScript grid makes it easier to display tabular data on web pages. Main benefits: 1 - Smart scrolling: you can handle scrolling events to load...
    - Mirror Testing Software: A program for managing a device that tests automotive mirrors after they leave the production line (also known as an End of Line Tester). Development is in ...
    - NPipeline: NPipeline is a .NET port of the Apache Commons Pipeline components. It is a lightweight set of utilities that make it simple to implement paralleli...
    - Portable Contacts: .net implementation of the Portable Contacts 1.0 Draft C specification
    - Random Projects: Some projects that I will be doing from now and on to next year.
    - SmartInspect Unity Interception Extension: This is a library to integrate and use the SmartInspect logging tool with the Unity dependency injection and AOP framework. Various attributes help yo...
    - Table2Class: Table2Class is a solution to create .NET class files (in C# or VB.NET) from tables in database (Microsoft SQL Server or MySQL databases)
    - UploadTransform: A project for the uploading and transformation of client data to a database backend
    - Wikiplexcontrib: This is the contrib project for wikiplex.
    - zevenseas Notifier: Little project that displays a notification on every page within a WebApplication of SharePoint. The message of the notification is centrally manag...

    New Releases

    - Acceptance Test Excel Addin: 1.0.0.1: Fixed two bugs: 1) highlight incorrectly when data table has filter 2) crash when named range is invalid
    - C++ Sparse Matrix Package: v1.0: Initial release. Read the README.txt file for more information.
    - Change Password Web Part for FBA-ADAM User: Usage instruction: Add the following in your web.config under <appSettings>: <add key="AdamServerName" value="Your Server Name" /> <add key="AdamSourc...
    - CollectAndCrop: spring release: This release includes the YUI compressor for .net http://yuicompressor.codeplex.com/ There are 2 new properties: CompressCss, a boolean that turns...
    - EnhSim: Release v1.9.8.0: Flame Shock dots can now produce critical strikes. Flame Shock dots are now affected by spell haste. Searing Totem and Magma Totem we...
    - EPiServer CMS Page Type Builder: Page Type Builder 1.2 Beta 1: First release that targets EPiServer CMS version 6. While it is most likely stable, further testing is needed.
    - EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.6.0.1: New features: Improved performance. Named ranges. Font-styling added to charts and ...
    - Image Ripper: Image Ripper: Fetch HD photos from specific web galleries like a charm.
    - IronRuby: 1.0 RC4: The IronRuby team is pleased to announce version 1.0 RC4! As IronRuby approaches the final 1.0, these RCs will contain crucial bug fixes and enhanc...
    - IST435: AJAX Demo: Demo of AJAX Control Toolkit extenders.
    - IST435: Representing Friendships: This sample is a quick'n'dirty demo of how you can implement the general concept of setting up Friendships among users based on the Membership Fram...
    - JavaScript Grid: Initial release: Initial release contains all source codes and two examples
    - kdar: KDAR 0.0.17: KDAR - Kernel Debugger Anti Rootkit - npfs.sys, msfs.sys, mup.sys checks added - fastfat.sys FAST I/O table check added
    - Microsoft - DDD NLayerApp .NET 4.0 Example (Microsoft Spain): V0.6 - N-Layer DDD Sample App: Required Software (Microsoft base software needed for development environment): Unity Application Block 1.2 - October 2008 http://www.microsoft.com/...
    - Mytrip.Mvc: Mytrip 1.0 preview 2: Article Manager, Blog Manager, EF Membership (.NET Framework 4), User Manager, File Manager, Localization, Captcha, ClientValidation, Theme
    - NetBuildConfigurator: Using NetBuildConfigurator Screencast: A demo and screencast of using BuildConfigurator.
    - NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.120: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's new: This version provi...
    - NoteExpress User Tools (NEUT) - Do it by ourselves!: NoteExpress User Tools 1.9.0: 1.9.0 test version: NoteExpress 2.5.0.1147 # changes targeting build 1147
    - Open NFe: DANFe v1.9.7: E-mail sending
    - patterns & practices - Windows Azure Guidance: Code drop 2: This is the first step in taking a-Expense to Windows Azure. Highlights of this release are: Use of SQL Azure as the backend store for applicatio...
    - patterns & practices - Windows Azure Guidance: Music Store sample application: Music Store is the sample application included in the Web Client Guidance project. We modified it so it now has a real data access layer, uses most...
    - Quick Anime Renamer: Quick Anime Renamer v0.2: Updated 3/23/2010. Fixed some painting errors. Fixed tab order.
    - Random Projects: Simple Chat Script: This contains chat commands for CONSTRUCTION servers
    - Rapidshare Episode Downloader: RED v0.8.1: Fixed numerous bugs - Added Next Episode feature - Made episode checking run in background thread - Extended both APIs to be more versatile - Pr...
    - Rapidshare Episode Downloader: RED v0.8.2: Fixed the list to update air date automatically when checking for episode availability
    - Selection Maker: Selection Maker 1.3: New features: Now the ListView can show icons of files. Better performance while showing files in ListView
    - Sprite Sheet Packer: 2.2 Release: Made generation of map file optional in sspack and UI. Fixed bug with image/map files being locked after first build requiring a restart to build ...
    - Table Storage Backup & Restore for Windows Azure: TableStorageBackup: Table Storage Backup & Restore
    - Table2Class: Table2Class v1.0: Download of the Visual Studio 2008 solution with the following projects: Table2Class.ClassMaker, a Windows Forms project comprising the Class Maker. Ta...
    - VBScript Login Script Creator: Login Script Creator 1.5: Removed IE7 option. Removed Internet Explorer temporary internet files option. Added overlay option. Added additional redirects for My Photos, My ...
    - VCC: Latest build, v2.1.30323.0: Automatic drop of latest build
    - XAML Code Snippets addin for Visual Studio 2010: First release: This version targets Visual Studio 2010 Release Candidate. Please consider this release as a Beta. Also provide feedback so that it can be improve...
    - Zeta Long Paths: Release 2010-03-24: Added functions to get file owner, creation time, last access time, last write time.
    - ZZZ CMS: Release 3.0.0: With mobile version of frontend.

    Most Popular Projects: MetaSharp; Rawr; WBFS Manager; Silverlight Toolkit; ASP.NET Ajax Library; Microsoft SQL Server Product Samples: Database; AJAX Control Toolkit; LiveUpload to Facebook; Windows Presentation Foundation (WPF); ASP.NET

    Most Active Projects: Rawr; jQuery Library for SharePoint Web Services; Farseer Physics Engine; BlogEngine.NET; LINQ to Twitter; Facebook Developer Toolkit; NB_Store - Free DotNetNuke Ecommerce Catalog Module; PHPExcel; Table2Class; patterns & practices: Composite WPF and Silverlight

    Read the article

  • Slow query logging in MySQL server

    - by Vinayak Mahadevan
    Hi, I have installed MySQL Community Edition 5.1.41 on a Windows 2000 server. In the my.ini file I have enabled slow query logging and redirected the output to a table. I have set long_query_time to 10 seconds. Then, after running some queries, I checked the slow query log table and found that all the queries which were executed have been logged, and a file called database-slow.log has also been created in the data folder. Can anybody please tell me where I am going wrong? I am using the built-in InnoDB and have not activated the InnoDB plugin. Thanks
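
    Two hedged checks that can help narrow this down on MySQL 5.1 (these are standard server variables, not taken from the original post): confirm where the server thinks slow-query output should go, and what threshold it is actually applying.

        -- 'log_output' can be FILE, TABLE, or both (e.g. 'FILE,TABLE'),
        -- which would explain entries appearing in both destinations
        SHOW GLOBAL VARIABLES LIKE 'log_output';

        -- Verify the active threshold; a value much lower than the intended
        -- 10 seconds would explain ordinary queries being logged
        SHOW GLOBAL VARIABLES LIKE 'long_query_time';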

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let's run a query against this DMV so we can analyze the results:

        SELECT * FROM sys.dm_tran_locks

    As we can see, a lot of lock information is returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the ID of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent a shared range lock, and IS locks, which represent Intent Shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
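
    Building on the columns described above, a minimal sketch of the kind of filter that is useful during a blocking investigation (only columns named in the post are used here):

        -- Show only lock requests that are waiting rather than granted,
        -- since those are the ones involved in blocking chains
        SELECT request_session_id,
               resource_type,
               resource_database_id,
               request_mode,
               request_status
        FROM sys.dm_tran_locks
        WHERE request_status = 'WAIT';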

    Read the article

  • How do I send traffic from my Mac's wifi to my VPN client?

    - by Heath Borders
    I need to connect my Android to a Juniper VPN. Unfortunately, Juniper doesn't support Android on our VPN version. We've already put in a feature request for it, but we have no idea how long it will take to be completed. Right now, I connect to the Juniper VPN with a Juniper Mac OS X VPN client that uses Java to install kernel extensions to start and stop the VPN. Thus, I can't use the Network panel in System Preferences to create a VPN device, which means it won't show up in the 'Sharing' panel's Internet Sharing "Share your connection from:" menu, as suggested here.

    I used newproc.d to see what /usr/libexec/InternetSharing did when it ran, and it runs the following processes:

        2013 Nov 1 00:26:54 5565 <1>    64b /usr/libexec/launchdadd
        2013 Nov 1 00:26:55 5566 <1>    64b /usr/libexec/InternetSharing
        2013 Nov 1 00:26:56 5568 <5566> 64b natpmpd -d -y bridge100 en0
        2013 Nov 1 00:26:56 5569 <1>    64b /usr/libexec/pfd -d
        2013 Nov 1 00:26:56 5567 <5566> 64b bootpd -d -P

    My Juniper VPN client creates the following devices (output of ifconfig):

        jnc0: flags=841<UP,RUNNING,SIMPLEX> mtu 1400
            inet 10.61.9.61 netmask 0xffffffff
            open (pid 920)
        jnc1: flags=841<UP,RUNNING,SIMPLEX> mtu 1450
            closed

    So, it seems like I should just be able to do this and have everything work:

        sudo killall -9 natpmpd
        sudo /usr/libexec/natpmpd -y bridge100 jnc0

    My Android connected fine and could hit public internet sites, but it couldn't hit private VPN sites. I assume this is because I need to change the routes that /usr/libexec/InternetSharing sets up.

    This is the output from sudo pfctl -s all before starting Internet Sharing:

        No ALTQ support in kernel
        ALTQ related functions disabled

        TRANSLATION RULES:
        nat-anchor "com.apple/*" all
        rdr-anchor "com.apple/*" all

        FILTER RULES:
        scrub-anchor "com.apple/*" all fragment reassemble
        anchor "com.apple/*" all

        DUMMYNET RULES:
        dummynet-anchor "com.apple/*" all

        INFO:
        Status: Disabled for 0 days 00:11:02    Debug: Urgent

        State Table          Total      Rate
          current entries        0
          searches           22875      34.6/s
          inserts             1558       2.4/s
          removals            1558       2.4/s
        Counters
          match               2005       3.0/s
          bad-offset             0       0.0/s
          fragment               0       0.0/s
          short                  0       0.0/s
          normalize              0       0.0/s
          memory                 0       0.0/s
          bad-timestamp          0       0.0/s
          congestion             0       0.0/s
          ip-option             12       0.0/s
          proto-cksum            0       0.0/s
          state-mismatch         1       0.0/s
          state-insert           0       0.0/s
          state-limit            0       0.0/s
          src-limit              0       0.0/s
          synproxy               0       0.0/s
          dummynet               0       0.0/s

        TIMEOUTS:
        tcp.first 120s, tcp.opening 30s, tcp.established 86400s, tcp.closing 900s,
        tcp.finwait 45s, tcp.closed 90s, tcp.tsdiff 60s, udp.first 60s,
        udp.single 30s, udp.multiple 120s, icmp.first 20s, icmp.error 10s,
        grev1.first 120s, grev1.initiating 30s, grev1.estblished 1800s,
        esp.first 120s, esp.estblished 900s, other.first 60s, other.single 30s,
        other.multiple 120s, frag 30s, interval 10s,
        adaptive.start 6000 states, adaptive.end 12000 states, src.track 0s

        LIMITS:
        states        hard limit   10000
        app-states    hard limit   10000
        src-nodes     hard limit   10000
        frags         hard limit    5000
        tables        hard limit    1000
        table-entries hard limit  200000

        OS FINGERPRINTS:
        696 fingerprints loaded

    This is the output from sudo pfctl -s all after starting Internet Sharing:

        No ALTQ support in kernel
        ALTQ related functions disabled

        TRANSLATION RULES:
        nat-anchor "com.apple/*" all
        nat-anchor "com.apple.internet-sharing" all
        rdr-anchor "com.apple/*" all
        rdr-anchor "com.apple.internet-sharing" all

        FILTER RULES:
        scrub-anchor "com.apple/*" all fragment reassemble
        scrub-anchor "com.apple.internet-sharing" all fragment reassemble
        anchor "com.apple/*" all
        anchor "com.apple.internet-sharing" all

        DUMMYNET RULES:
        dummynet-anchor "com.apple/*" all

        STATES:
        ALL tcp 10.0.1.32:50593 -> 74.125.225.113:443   SYN_SENT:CLOSED
        ALL udp 10.0.1.32:61534 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL udp 10.0.1.32:55433 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL udp 10.0.1.32:64041 -> 10.0.1.1:53          SINGLE:NO_TRAFFIC
        ALL tcp 10.0.1.32:50619 -> 74.125.225.131:443   SYN_SENT:CLOSED

        INFO:
        Status: Enabled for 0 days 00:00:01     Debug: Urgent

        State Table          Total      Rate
          current entries        5
          searches           22886   22886.0/s
          inserts             1563    1563.0/s
          removals            1558    1558.0/s
        Counters
          match               2010    2010.0/s
          bad-offset             0       0.0/s
          fragment               0       0.0/s
          short                  0       0.0/s
          normalize              0       0.0/s
          memory                 0       0.0/s
          bad-timestamp          0       0.0/s
          congestion             0       0.0/s
          ip-option             12      12.0/s
          proto-cksum            0       0.0/s
          state-mismatch         1       1.0/s
          state-insert           0       0.0/s
          state-limit            0       0.0/s
          src-limit              0       0.0/s
          synproxy               0       0.0/s
          dummynet               0       0.0/s

        TIMEOUTS:
        tcp.first 120s, tcp.opening 30s, tcp.established 86400s, tcp.closing 900s,
        tcp.finwait 45s, tcp.closed 90s, tcp.tsdiff 60s, udp.first 60s,
        udp.single 30s, udp.multiple 120s, icmp.first 20s, icmp.error 10s,
        grev1.first 120s, grev1.initiating 30s, grev1.estblished 1800s,
        esp.first 120s, esp.estblished 900s, other.first 60s, other.single 30s,
        other.multiple 120s, frag 30s, interval 10s,
        adaptive.start 6000 states, adaptive.end 12000 states, src.track 0s

        LIMITS:
        states        hard limit   10000
        app-states    hard limit   10000
        src-nodes     hard limit   10000
        frags         hard limit    5000
        tables        hard limit    1000
        table-entries hard limit  200000

        TABLES:

        OS FINGERPRINTS:
        696 fingerprints loaded

    It looks like I need to change the pf settings that /usr/libexec/InternetSharing set up, but I have no idea how to do that.

    Read the article

  • Use Microsoft PowerPivot to Access Salesforce.com Through the OData Connector

    - by dataintegration
    This article will explain how to connect to any of our OData Connectors with Microsoft Excel's PowerPivot business intelligence tool. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors.

    Step 1: Download and install both the Salesforce Connector from RSSBus and PowerPivot for Excel from Microsoft.

    Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector.

    Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open Excel and select the PowerPivot tab at the top of the window.

    Step 4: Here you will click on the button labeled PowerPivot Window at the top left.

    Step 5: A new pop-up will appear. Now select the option "From Data Feeds".

    Step 6: In the resulting Table Import Wizard you will enter the OData URL of the Salesforce Connector. You can find this by clicking on the Settings tab of the Salesforce Connector. It will look something like this: http://localhost:8181/sfconnector/data/conn/odata.rsc. You will also need to add authentication options in this step. To do this, click on the Advanced button and scroll down to the Security section of the resulting pop-up window. Change the Integrated Security option to "Basic". You will also need to enter the User ID and Password of the user who has access to the Salesforce Connector.

    Step 7: When the connection to the Salesforce Connector is successful, click the Next button at the bottom of the window.

    Step 8: A table listing of the available tables will appear in the next window of the wizard. Here you will select which tables you want to import and click Finish.

    Step 9: If the import was successful, click Close and you are done! Your data is now in PowerPivot.

    Read the article

  • ODI 11g – How to override SQL at runtime?

    - by David Allan
    Following on from the posting some time back entitled ‘ODI 11g – Simple, Powerful, Flexible’, here we push the envelope even further. Rather than having the SQL we override defined statically in the interface design, we will have it configurable via a variable... at runtime. Imagine you have a well-defined interface shape that you want fulfilled, and that shape can be satisfied from a number of different sources: that is what this allows - the ability for one interface to consume data from many different places using variables. The cool thing about ODI's reference API is that it can be fantastically flexible and useful. When I use the variable as the option value and execute the top-level scenario that uses this temporary interface, I get prompted (or can be prompted, to be correct) for the value of the variable. Note I am using the <@=odiRef.getObjectName("L","EMP", "SCOTT","D")@> notation for the table reference; since this is done at runtime, the context will resolve to the correct table name. Each time I execute, I could use a different source provider (obviously with some dependencies on KMs/technologies here). For example, with the following groovy snippet, the first execution's query uses the SCOTT model with EMP, and the next time it is the BOB model and the datastore OTHERS:

        m=new Properties();
        m.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"EMP\", \"SCOTT\",\"D\")@>");
        s=new StartupParams(m);
        runtimeAgent.startScenario("TOP", null, s, null, "GLOBAL", 5, null, true);

        m2=new Properties();
        m2.put("DEMO.SQLSTR", "select empno, deptno from <@=odiRef.getObjectName(\"L\",\"OTHERS\", \"BOB\",\"D\")@>");
        s2=new StartupParams(m2);
        runtimeAgent.startScenario("TOP", null, s2, null, "GLOBAL", 5, null, true);

    You'll need a patch to 11.1.1.6 for this type of capability. Thanks to my ole buddy Ron Gonzalez from the Enterprise Management group for help pushing the envelope!

    Read the article

  • How can you leverage the new dynamics of the company-consumer relationship to optimize the multichannel experience and make Customer Service more efficient while creating and maintaining the "brand promise"?

    - by Silvia Valgoi
    Find out next July 4 in Milan! Oracle has organized a workshop to share experiences and case studies on the theme of Service Excellence. In a constantly connected world, where consumer expectations keep rising, one area in which companies can truly differentiate themselves, maintaining leadership and market share, is the Customer Service Experience they provide. But how can they leverage these new dynamics of the relationship between the company and the individual consumer to optimize the multichannel experience and make Customer Service more efficient, creating value and maintaining the "brand promise"? With the contribution of the ASAP Service Management Forum, a privileged observatory on Service topics, and with the help of first-hand testimonials, we will map out the paths to be taken, or already taken, to develop effective Customer Experience strategies that account for the crucial role the consumer plays when interacting with the company. Don't miss this event!

    Read the article

  • Postfix configuration - Using Virtualmin but server is bouncing back my mail.

    - by brodiebrodie
    I have no experience in setting up postfix, and thought Virtualmin might do the legwork for me. Apparently not. When I try to send mail to the domain (either [email protected], [email protected], or [email protected]) I get the following message returned:

        This is the mail system at host dedq239.localdomain.
        I'm sorry to have to inform you that your message could not
        be delivered to one or more recipients. It's attached below.
        For further assistance, please send mail to <postmaster>
        If you do so, please include this problem report. You can
        delete your own text from the attached returned message.
                           The mail system
        <[email protected]> (expanded from <[email protected]>):
        User unknown in virtual alias table

        Final-Recipient: rfc822; [email protected]
        Original-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.0.0
        Diagnostic-Code: X-Postfix; User unknown in virtual alias table

    How can I diagnose the problem here? It seems that the mail gets to my server, but the server fails to deliver the message locally to the correct user. (This is a guess; truthfully I have no idea what is happening.) I have checked my virtual alias table and it seems to be set up correctly (I can post it if that would be helpful). Can anyone give me a clue as to the next step? Thanks.

    My postfix main.cf:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        local_recipient_maps = $virtual_mailbox_maps
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        soft_bounce = no
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = hash:/etc/postfix/virtual

    My mail log file (the last entries):

        Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]>
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active)
        Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table)
        Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]>
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active)
        Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169
        Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed
        Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91)
        Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed
        Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228]
        etc.

    My /etc/aliases file is below. I have not touched this file; myvirtualdomain is a replacement for my real domain name.

        # Aliases in this file will NOT be expanded in the header from
        # Mail, but WILL be visible over networks or from /bin/mail.
        #
        # >>>>>>>>>>    The program "newaliases" must be run after
        # >> NOTE >>    this file is updated for any changes to
        # >>>>>>>>>>    show through to sendmail.
        #
        # Basic system aliases -- these MUST be present.
        mailer-daemon: postmaster
        postmaster: root
        # General redirections for pseudo accounts.
        bin: root
        daemon: root
        adm: root
        lp: root
        sync: root
        shutdown: root
        halt: root
        mail: root
        news: root
        uucp: root
        operator: root
        games: root
        gopher: root
        ftp: root
        nobody: root
        radiusd: root
        nut: root
        dbus: root
        vcsa: root
        canna: root
        wnn: root
        rpm: root
        nscd: root
        pcap: root
        apache: root
        webalizer: root
        dovecot: root
        fax: root
        quagga: root
        radvd: root
        pvm: root
        amanda: root
        privoxy: root
        ident: root
        named: root
        xfs: root
        gdm: root
        mailnull: root
        postgres: root
        sshd: root
        smmsp: root
        postfix: root
        netdump: root
        ldap: root
        squid: root
        ntp: root
        mysql: root
        desktop: root
        rpcuser: root
        rpc: root
        nfsnobody: root
        ingres: root
        system: root
        toor: root
        manager: root
        dumper: root
        abuse: root
        newsadm: news
        newsadmin: news
        usenet: news
        ftpadm: ftp
        ftpadmin: ftp
        ftp-adm: ftp
        ftp-admin: ftp
        www: webmaster
        webmaster: root
        noc: root
        security: root
        hostmaster: root
        info: postmaster
        marketing: postmaster
        sales: postmaster
        support: postmaster
        # trap decode to catch security attacks
        decode: root
        # Person who should get root's mail
        #root: marc
        abuse-myvirtualdomain.com: [email protected]

    My /etc/postfix/virtual file is below; again, myvirtualdomain is a replacement. I think this file was generated by Virtualmin, and I have tried messing around with it with no success... This is the version without my changes.

        myunixusername@myvirtualdomain.com    myunixusername
        myvirtualdomain.com                   myvirtualdomain.com
        [email protected]                 [email protected]
        [email protected]                 [email protected]
        [email protected]                 [email protected]
        [email protected]                 [email protected]
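
    The bounce ("User unknown in virtual alias table") indicates that postfix looked the recipient up in the virtual map and found no usable result, so one thing worth checking is whether the map actually routes the domain's addresses to a real local user. As a sketch only (the names below are placeholders, and this is a guess from the symptoms rather than a confirmed fix), a working virtual map typically looks like this:

        # /etc/postfix/virtual
        # Declare the virtual alias domain (the right-hand side is ignored).
        myvirtualdomain.com         anything
        # Route a specific mailbox to a local Unix user.
        [email protected]    myunixusername
        # Optional catch-all for everything else at the domain.
        @myvirtualdomain.com        myunixusername

    After any edit, rebuild the hash file and reload postfix so the change is picked up:

        postmap /etc/postfix/virtual
        postfix reload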

    Read the article

  • How to warehouse data that is not needed from MS SQL Server

    - by I__
    I have been asked to truncate a large table in MS SQL Server 2008. The data is not needed but might be needed once every two years. It will NEVER have to be changed, only viewed. The question is, since I don't need the data on a day-to-day basis, what do I do with it to protect and back it up? Please keep in mind that I will need to have it accessible maybe once every two years, and it is FINE for us if the recovery process takes a few hours. The entire table is about 3 million rows and I need to truncate it to about 1 million rows.
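
    One approach that fits these constraints is to copy the two million old rows into a separate archive database, back that database up to a file you can restore on demand, and only then delete the rows in batches. The sketch below uses invented names (ArchiveDB, dbo.BigTable, CreatedDate); adjust them to the real schema.

        -- Copy the rows to be removed into an archive database (names are placeholders).
        SELECT *
        INTO ArchiveDB.dbo.BigTable_Archive
        FROM dbo.BigTable
        WHERE CreatedDate < '20100101';

        -- Back up the archive so it can live offline until it is needed.
        BACKUP DATABASE ArchiveDB TO DISK = N'D:\Backups\ArchiveDB.bak';

        -- Delete the archived rows in batches to keep the transaction log small.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (50000) FROM dbo.BigTable
            WHERE CreatedDate < '20100101';
            IF @@ROWCOUNT = 0 BREAK;
        END

    Recovery is then a RESTORE DATABASE from the .bak file, which comfortably fits the "a few hours once every two years" requirement.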

    Read the article

  • SQL SERVER – Four Tutorials for SQL Server 2012 New Features

    - by pinaldave
    One of the most common questions I receive on my Facebook page is whether there is any tutorial for the new and enhanced features of SQL Server 2012. I see this demand increasing as SQL Server 2012 is adopted more and more. Here is a list of four tutorials created specifically for SQL Server 2012 by Microsoft.
    Multidimensional Modeling (Adventure Works Tutorial): This tutorial teaches you how to develop and deploy an Analysis Services project that enables the employees of Adventure Works Cycles to analyze various aspects of their business.
    Tabular Modeling (Adventure Works Tutorial): This tutorial teaches you how to create a SQL Server 2012 Analysis Services tabular model that enables sales and marketing teams to easily analyze internet sales data in the AdventureWorksDW2012 data warehouse. You will build the tabular model in SQL Server Data Tools.
    Tutorials and Demos for Power View: Create Power View reports and explore Power View features. View demos, videos, and tutorials that help you get started quickly with Power View and successfully build reports with interactive filters and visualizations such as bubble charts, tiles, and cards.
    Tutorial: Using the hierarchyid Data Type: This tutorial is intended for users who are experienced with Transact-SQL but are new to the hierarchyid data type. In this tutorial, you convert an existing table to a hierarchical structure, and you also create a new table to store and manage hierarchical data efficiently.
    Note: The descriptions are taken from the original course descriptions. You will need to install the SQL Server 2012 AdventureWorks databases for all of these tutorials. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
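
    As a quick taste of that last item, here is a tiny example of the hierarchyid type. This is an invented sketch, not taken from the tutorial (which works through its own, larger schema): create a table keyed on hierarchyid, insert a root node and one child, and read back each node's depth and path.

        -- Invented example table; the tutorial uses a different schema.
        CREATE TABLE dbo.Org
        (
            Node hierarchyid PRIMARY KEY,
            Name nvarchar(50) NOT NULL
        );

        -- A root node and one child created directly below it.
        INSERT INTO dbo.Org VALUES
            (hierarchyid::GetRoot(), N'CEO'),
            (hierarchyid::GetRoot().GetDescendant(NULL, NULL), N'VP Sales');

        -- GetLevel() gives the depth; ToString() gives a readable path such as /1/.
        SELECT Name, Node.GetLevel() AS Depth, Node.ToString() AS Path
        FROM dbo.Org;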

    Read the article

  • Monitoring SQL Server Agent job run times

    - by okeofs
    Introduction: A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there were a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPIs to the end query, the goal being to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

        select sj.name, sh.run_date, sh.run_duration,
               case
                 when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                 when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                 when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                 when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                 when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                 when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
               end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobhistory sh with (nolock) on sj.job_id = sh.job_id
        where sj.name = 'My Agent Job'
          and sh.[message] like '%The job%'

    Run_date and run_duration are obvious fields. The field 'name' is the name of the job that we wish to follow. The only major challenge was that the format of run_duration is not as user friendly as I would have liked. As an example, a run of 1 hour, 10 minutes and 3 seconds is stored as 11003, whereas I wanted it displayed in a more user friendly manner as 01:10:03. The case logic shown above pads run_duration with leading zeros to six digits; what remains is to insert colons between the hours and minutes and between the minutes and seconds. To achieve this, I nested the query shown above within a 'super' query, so that the run time ([run_time]) is constructed by concatenating a series of substrings.

        select run_date,
               substring(convert(varchar(20), tt), 1, 2) + ':' +
               substring(convert(varchar(20), tt), 3, 2) + ':' +
               substring(convert(varchar(20), tt), 5, 2) as [run_time]
        from (
            select sj.name, sh.run_date, sh.run_duration,
                   case
                     when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                     when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                     when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                     when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                     when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                     when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
                   end as tt
            from dbo.sysjobs sj with (nolock)
            inner join dbo.sysjobhistory sh with (nolock) on sj.job_id = sh.job_id
            where sj.name = 'My Agent Job'
              and sh.[message] like '%The job%'
        ) a

    Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling average run time. To do this, I did the calculations in base units of seconds, encapsulating the query shown above within a further 'super' query. The astute reader will note that this relies on implicit casting from string to integer, which is not the best method to use, although it works; if I were constructing the query again I would definitely do an explicit convert. To recap: I now have a key field of '1', each and every applicable run date, and the total number of SECONDS that the process ran on each run date, all of this data within the #rawdata1 temporary table.

        select 1 as keyy,
               run_date,
               (substring(b.run_time, 1, 2) * 3600) +
               (substring(b.run_time, 4, 2) * 60) +
               (substring(b.run_time, 7, 2)) as run_time_in_seconds,
               run_time
        into #rawdata1
        from (
            select run_date,
                   substring(convert(varchar(20), tt), 1, 2) + ':' +
                   substring(convert(varchar(20), tt), 3, 2) + ':' +
                   substring(convert(varchar(20), tt), 5, 2) as [run_time]
            from (
                select sj.name, sh.run_date, sh.run_duration,
                       case
                         when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
                       end as tt
                from dbo.sysjobs sj with (nolock)
                inner join dbo.sysjobhistory sh with (nolock) on sj.job_id = sh.job_id
                where sj.name = 'My Agent Job'
                  and sh.[message] like '%The job%'
            ) a
        ) b

    Calculating the average run time: We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2, once again with a hardwired key of '1'. The purpose of doing so is to make the average, AVG(), available to the final query immediately without having to do adverse grouping.

        select 1 as keyy, run_time_in_seconds
        into #rawdata2
        from #rawdata1

    Applying KPI logic: At this point, we apply some logic to determine whether processing times are within the norms, expressing the result as colour names. Obviously, this example is a super one for SSRS and traffic-light icons.

        select rd1.run_date, rd1.run_time, rd1.run_time_in_seconds,
               avg(rd2.run_time_in_seconds) as average_run_time_in_seconds,
               case
                 when convert(decimal(10,1), rd1.run_time_in_seconds) / avg(rd2.run_time_in_seconds) <= 1.2 then 'Green'
                 when convert(decimal(10,1), rd1.run_time_in_seconds) / avg(rd2.run_time_in_seconds) < 1.4 then 'Yellow'
                 else 'Red'
               end as [color],

    Calculating the average run time in hours, minutes and seconds completes the select list and the end of the query.

               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)
               end + ':' +
               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)
               end + ':' +
               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)
               end as [Average Run Time HH:MM:SS]
        from #rawdata2 rd2
        inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
        group by rd1.run_date, rd1.run_time, rd1.run_time_in_seconds
        order by rd1.run_date desc

    The complete code example:

        use msdb
        go
        /*
        drop table #rawdata1
        drop table #rawdata2
        go
        */
        select 1 as keyy,
               run_date,
               (substring(b.run_time, 1, 2) * 3600) +
               (substring(b.run_time, 4, 2) * 60) +
               (substring(b.run_time, 7, 2)) as run_time_in_seconds,
               run_time
        into #rawdata1
        from (
            select run_date,
                   substring(convert(varchar(20), tt), 1, 2) + ':' +
                   substring(convert(varchar(20), tt), 3, 2) + ':' +
                   substring(convert(varchar(20), tt), 5, 2) as [run_time]
            from (
                select sj.name, sh.run_date, sh.run_duration,
                       case
                         when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 5 then '0' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 4 then '00' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 3 then '000' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 2 then '0000' + convert(varchar(8), sh.run_duration)
                         when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
                       end as tt
                from dbo.sysjobs sj with (nolock)
                inner join dbo.sysjobhistory sh with (nolock) on sj.job_id = sh.job_id
                where sj.name = 'My Agent Job'
                  and sh.[message] like '%The job%'
            ) a
        ) b

        select 1 as keyy, run_time_in_seconds
        into #rawdata2
        from #rawdata1

        select rd1.run_date, rd1.run_time, rd1.run_time_in_seconds,
               avg(rd2.run_time_in_seconds) as average_run_time_in_seconds,
               case
                 when convert(decimal(10,1), rd1.run_time_in_seconds) / avg(rd2.run_time_in_seconds) <= 1.2 then 'Green'
                 when convert(decimal(10,1), rd1.run_time_in_seconds) / avg(rd2.run_time_in_seconds) < 1.4 then 'Yellow'
                 else 'Red'
               end as [color],
               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) / 3600)
               end + ':' +
               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) % 3600 / 60)
               end + ':' +
               case when len(convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)) = 1
                    then '0' + convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)
                    else convert(varchar(2), avg(rd2.run_time_in_seconds) % 60)
               end as [Average Run Time HH:MM:SS]
        from #rawdata2 rd2
        inner join #rawdata1 rd1 on rd1.keyy = rd2.keyy
        group by rd1.run_date, rd1.run_time, rd1.run_time_in_seconds
        order by rd1.run_date desc

    Read the article

  • In MySQL 5.1 InnoDB, does the maximum length of a VARCHAR affect secondary index size?

    - by e_tothe_ipi
    Assuming the data is the same either way, does the maximum length of the VARCHAR affect the space usage of a secondary index? Does InnoDB use fixed-length records for indexes? Assume that we're talking about MySQL 5.1, with the InnoDB COMPRESSED table format, and that the field in question is defined as a VARCHAR with some length less than or equal to 255 (so that it uses only one byte for the offset). Here is the use case: I have a server with a very large table (several gigabytes). One of the fields is currently VARCHAR(7). We need it a little longer and we are thinking of making it VARCHAR(255), but we are worried that it will bloat the index.
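
    Rather than reasoning about it abstractly, this is cheap to measure on a scratch copy of the table. The snippet below is a sketch with placeholder names (my_table_copy, my_col): widen the column, then compare index_length before and after the change.

        -- Widen the column on a disposable copy (placeholder names).
        ALTER TABLE my_table_copy MODIFY my_col VARCHAR(255);

        -- Compare on-disk sizes before and after the ALTER; index_length
        -- covers the secondary indexes (the figures are estimates for InnoDB).
        SELECT table_name,
               data_length  / 1024 / 1024 AS data_mb,
               index_length / 1024 / 1024 AS index_mb
        FROM information_schema.TABLES
        WHERE table_schema = DATABASE()
          AND table_name = 'my_table_copy';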

    Read the article

< Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >