Search Results

Search found 59060 results on 2363 pages for 'dummy data'.

  • Displaying a Sorted, Paged, and Filtered Grid of Data in ASP.NET MVC

    Over the past couple of months I've authored five articles on displaying a grid of data in an ASP.NET MVC application. The first article in the series focused on simply displaying data. This was followed by articles showing how to sort, page, and filter a grid of data. We then examined how to both sort and page a single grid of data. This article looks at how to add the final piece to the puzzle: we'll see how to combine sorting, paging and filtering when displaying data in a single grid. Like with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more! Read More >
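    For readers who just want the general shape of the technique, here is a rough sketch (not the article's actual code; the Product type, page size, and parameter names are invented for illustration) of an MVC controller action that applies filtering, then sorting, then paging with LINQ before handing the page of rows to the view:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class Product
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        public class ProductsController : Controller
        {
            // Hypothetical in-memory data; the article queries a database instead.
            private static readonly List<Product> Products = new List<Product>
            {
                new Product { Name = "Chai",  Price = 18m },
                new Product { Name = "Chang", Price = 19m }
            };

            // GET /Products?filter=ch&sortBy=Price&page=2
            public ActionResult Index(string filter, string sortBy = "Name", int page = 1)
            {
                const int pageSize = 10;
                IEnumerable<Product> query = Products;

                // 1. Filter first, so sorting and paging operate on the reduced set.
                if (!string.IsNullOrEmpty(filter))
                    query = query.Where(p => p.Name.Contains(filter));

                // 2. Sort.
                query = sortBy == "Price" ? query.OrderBy(p => p.Price)
                                          : query.OrderBy(p => p.Name);

                // 3. Page.
                var pageOfProducts = query.Skip((page - 1) * pageSize).Take(pageSize).ToList();

                return View(pageOfProducts);
            }
        }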

    Read the article

  • How to implement self-hosted WCF Data Services (http://localhost:1234/myDataService.svc/...)

    - by warmcold
    I have a project that needs to implement WCF Data Services (OData) to retrieve data from a control system (a .NET Framework application). The WCF Data Service needs to be hosted by the .NET application itself (no ASP.NET and no IIS). I have seen many WCF Data Service examples recently; they are all hosted by an ASP.NET application. I have also seen self-host (console application) examples, but those are for a plain WCF service, not a WCF Data Service. Here is my question: is it possible to have a standalone .NET application host a WCF Data Service (http://localhost:1234/mydataservice.svc/...)? If yes, can someone provide an example? Thanks.
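    For what it's worth, WCF Data Services can be hosted in any .NET process by using the DataServiceHost class (a WebServiceHost subclass) instead of an .svc file in IIS. The console-host sketch below is only an illustration built on assumptions of my own: the Product entity, the in-memory data, and the read-only access rule are all invented, and listening on an HTTP port may additionally require a URL reservation (netsh http add urlacl) or elevated rights on the machine.

        using System;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        // Illustrative entity; the reflection provider needs a key, declared here explicitly.
        [DataServiceKey("Id")]
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Illustrative read-only data context exposed through the OData endpoint.
        public class ProductContext
        {
            private static readonly Product[] Data =
            {
                new Product { Id = 1, Name = "Widget" },
                new Product { Id = 2, Name = "Gadget" }
            };

            public IQueryable<Product> Products
            {
                get { return Data.AsQueryable(); }
            }
        }

        public class MyDataService : DataService<ProductContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

        class Program
        {
            static void Main()
            {
                var baseAddress = new Uri("http://localhost:1234/myDataService.svc");

                // DataServiceHost lives in System.Data.Services; no ASP.NET or IIS is involved.
                using (var host = new DataServiceHost(typeof(MyDataService), new[] { baseAddress }))
                {
                    host.Open();
                    Console.WriteLine("OData service listening at " + baseAddress);
                    Console.ReadLine();
                }
            }
        }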

    Read the article

  • Writing a Data Access Layer (DAL) for SQL Server

    In this tip, I am going to show you how you can create a Data Access Layer (to store, retrieve and manage data in a relational database) in ADO.NET. I will show how you can make it data-provider independent, so that you don't have to rewrite your data access layer if the data storage source changes, and so that you can reuse it in other applications that you develop.
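    The usual route to that provider independence (a minimal sketch of the general idea, not the tip's actual code) is to resolve the concrete ADO.NET provider through DbProviderFactories and program against the abstract DbConnection, DbCommand and DbDataAdapter types, so the provider invariant name and connection string can live in configuration:

        using System.Data;
        using System.Data.Common;

        // Provider-independent helper: the caller supplies the ADO.NET provider's invariant
        // name (e.g. "System.Data.SqlClient") and a connection string, so switching databases
        // means changing configuration values rather than rewriting data access code.
        public static class Dal
        {
            public static DataTable ExecuteQuery(string providerName, string connectionString, string sql)
            {
                DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);

                using (DbConnection connection = factory.CreateConnection())
                using (DbCommand command = factory.CreateCommand())
                using (DbDataAdapter adapter = factory.CreateDataAdapter())
                {
                    connection.ConnectionString = connectionString;
                    command.Connection = connection;
                    command.CommandText = sql;
                    adapter.SelectCommand = command;

                    var table = new DataTable();
                    adapter.Fill(table);   // the adapter opens and closes the connection itself
                    return table;
                }
            }
        }

    A call such as Dal.ExecuteQuery("System.Data.SqlClient", connectionString, "SELECT * FROM Customers") then keeps working when the provider name is swapped for another installed ADO.NET provider, as long as the SQL itself stays portable.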

    Read the article

  • URGENT: Patches Needed to Prevent Data Corruption in Oracle Payments

    - by LuciaC
    Development is seeing a number of datafix bugs logged for PPR committing data in Payments (IBY) while the corresponding payments are missing in Payables. These bugs have been investigated and fixed; however, customers need to proactively apply these fixes to prevent data corruption. There are two root-cause patches available for this case of partial data commit. It is critical that all R12/12.1 Payments customers apply the following two patches ASAP: a) Patch 11699958: R12: Error during PPR Leads to Incomplete Data Commit and Inconsistent Status (Doc ID 1338425.1); b) Patch 15867522: Confirmed PPR Batches Show Payment Initiated - Data Exist Only in IBY Tables (Doc ID 1506611.1).

    Read the article

  • The Latest In Master Data Management

    Today master data continues to expand while data quality becomes more important. The challenge of clean data is not new, but the stakes and complexities are higher than ever. Fortunately, Oracle has a solution -- Oracle Master Data Management. Hear from Pascal Laik, VP Oracle MDM Product Strategy about the benefits of Master Data Management, the solutions that Oracle offers and why they are unique and what benefits customers are deriving from Oracle MDM products. Learn about the latest product in the Oracle MDM family and where Oracle MDM strategy is heading.

    Read the article

  • How to store and update data table on client side (iOS MMO)

    - by farseer2012
    Currently I'm developing an iOS MMO game with cocos2d-x; the game depends on many data tables (Excel files) given by the designers. These tables contain data like how much gold/crystal it costs to upgrade a building (barracks, laboratory, etc.). We have about 10 tables, each with about 50 rows of data. My question is how to store those tables on the client side and how to update them once they have been modified on the server side. My opinion: use SQLite to store data on the client side; the server will parse the Excel files and send the data to the client in JSON format, then the client parses the JSON string and saves it to the SQLite file. Is there any better method? I find that some games store CSV files on the client side; how do they update those files? Could the server send a whole file directly to the client?

    Read the article

  • Get your picture on the screen at MIX11: Help me create a repository of sample data

    - by Laurent Bugnion
    Here is your chance to get your picture on the big screen during my MIX11 presentation in April this year. I need to create a small repository of sample data for my demos. So instead of tapping into my imagination and creating dummy users (or reusing information I already used in other demos), I thought I would appeal to the amazing community: send me an email with the following information, and I will include the first 30 users in my sample data repository and use your info in my demo: first name, last name, date of birth, picture, and a link to your Facebook profile (optional). Disclaimer: the data will only live locally on my hard drive. The demos will, however, be filmed and the videos made public. By providing this information, you explicitly consent to this data being used in demos at MIX11 and possibly in subsequent conferences. The data will only be used for demo purposes. Thanks for your help!! Laurent Bugnion (GalaSoft)

    Read the article

  • Using VBA to model data in Autodesk Inventor?

    - by user108478
    I have a close friend who is using a specific device that records the dimensions of an object as it is eroded and outputs the dimensional data to an Excel sheet. The object is spherical in nature but is eroded from the top and bottom, so the shape is constantly changing and a single formula for surface area and volume would not work. This is where Inventor comes in. My friend can plug the dimensional data into Inventor and it immediately returns the surface area and volume. The erosion process takes several minutes to complete and records data at very short intervals, so it would be very arduous to plug in the data thousands of times. Since Inventor supports macros and VBA, is there a way to feed the data into Inventor automatically and output the results into another spreadsheet? Any suggestions would be appreciated.

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Validating Data

    To continue our series, let's look at data validation in our business applications. Updating data is great, but when you enable data updates you often need to check the data to ensure it is valid. RIA Services has a clean, prescriptive pattern for handling this. First let's look at what you get for free. The value for any field entered has to be valid for the range of that data type. For example, you never need to write code to ensure someone didn't type in "forty-two" into...
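    The "prescriptive pattern" referred to here is, in broad strokes, the System.ComponentModel.DataAnnotations attributes that RIA Services propagates from the server to the Silverlight client. A rough, hypothetical sketch (the Employee entity and its fields are made up, not taken from the article):

        using System.ComponentModel.DataAnnotations;

        // Validation rules are declared once on a metadata "buddy" class alongside the
        // (typically generated) entity in the domain service project; RIA Services then
        // enforces them on both the server and the Silverlight client.
        [MetadataType(typeof(Employee.EmployeeMetadata))]
        public partial class Employee
        {
            internal sealed class EmployeeMetadata
            {
                private EmployeeMetadata() { }

                [Required(ErrorMessage = "A last name is required.")]
                [StringLength(50)]
                public string LastName { get; set; }

                [Range(18, 100, ErrorMessage = "Age must be between 18 and 100.")]
                public int Age { get; set; }
            }
        }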

    Read the article

  • How to deal with a ten-day debugging session? [on hold]

    - by smonff
    Ten days ago, I fixed a bug in a large application, and the hot fix caused some data to disappear from the user's point of view. The data is not deleted, but has been set to a hidden status. It should be possible to get the data back, but it seems hard: I've already spent 10 days understanding and reproducing the problem (mostly with complex SQL queries, though sometimes it is necessary to update the database to test the application logic). Is 10 days a normal amount of time for this kind of problem? Should we keep going and retrieve the data, or should we tell these users, sorry for the loss, but your data has disappeared? Any advice on when to stop searching for a way to solve the issue?

    Read the article

  • Apache virtual host does not work properly

    - by Jori
    I have read a lot of information all over the Internet regarding this subject, and cannot figure out what I'm doing wrong. I'm trying to host two websites under different names locally under Windows 7 with Apache's virtual hosting functionality. This is what I have done already. In the httpd.conf file I uncommented the following line, so that the virtual host configuration file will be included in the main configuration sequence:

        # Virtual hosts
        Include conf/extra/httpd-vhosts.conf

    This is how I edited my httpd-vhosts.conf:

        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.

        #
        # Use name-based virtual hosting.
        #
        NameVirtualHost *:80

        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host.localhost"
        #    ServerName dummy-host.localhost
        #    ServerAlias www.dummy-host.localhost
        #    ErrorLog "logs/dummy-host.localhost-error.log"
        #    CustomLog "logs/dummy-host.localhost-access.log" common
        #</VirtualHost>
        #
        #<VirtualHost *:80>
        #    ServerAdmin [email protected]
        #    DocumentRoot "C:/apache/docs/dummy-host2.localhost"
        #    ServerName dummy-host2.localhost
        #    ErrorLog "logs/dummy-host2.localhost-error.log"
        #    CustomLog "logs/dummy-host2.localhost-access.log" common
        #</VirtualHost>

        <VirtualHost *:80>
            ServerName arterieur
            DocumentRoot "J:/webcontent/www20"
            <Directory "J:/webcontent/www20">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    As you can see, I commented the VirtualHost examples out and added my own (only one for this example). I am also sure that J:\webcontent\www20 exists. Finally, I edited the Windows hosts file located at C:\Windows\System32\drivers\etc\hosts; now it looks like this:

        # Copyright (c) 1993-2009 Microsoft Corp.
        #
        # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
        #
        # This file contains the mappings of IP addresses to host names. Each
        # entry should be kept on an individual line. The IP address should
        # be placed in the first column followed by the corresponding host name.
        # The IP address and the host name should be separated by at least one
        # space.
        #
        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        # localhost name resolution is handled within DNS itself.
        #       127.0.0.1       localhost
        #       ::1             localhost
        127.0.0.1       arterieur

    Then I restarted Apache with the Apache Service Monitor, and it gave me the following fatal error: "The requested operation has failed!" I tried to look at the apache/logs/error.log file, but it did not log anything; I guess it only logs errors after startup. Does anyone know what I'm doing wrong?

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

        Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
        ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
        SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

        US Census Bureau Website, http://www.census.gov/geo/www/tiger/
        Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
        Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report option for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps: If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click on the New Layer Wizard button in the Map Layers window. And then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next.
    On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application I thought “Hey, this should be easy!” because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately you can’t use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there’s a reference to an assembly that’s not available (System.Windows.Browser). My second thought was “OK, no problem!” because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you’d like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that’s running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site’s page, so the application has to build credentials that are passed to SharePoint’s WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn’t support authentication; there is a Credentials property, but when you set it and make the request you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there’s already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I’m writing in this paragraph are just assumptions that I make which make a lot of sense IMHO; I don’t have any info that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet gets all the announcements from an Announcements list in a SharePoint site:

        HttpWebRequest webReq =
            (HttpWebRequest)HttpWebRequest.Create("http://yoursite/_vti_bin/listdata.svc/Announcements");
        webReq.Credentials = new NetworkCredential("username", "password");

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

                XDocument xdoc = XDocument.Load(
                    ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

                XNamespace ns = "http://www.w3.org/2005/Atom";
                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                        MessageBox.Show(item.Title);
                });
            }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is displayed in a MessageBox. Check out my previous post if you’d like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it’s of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method that gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

        delegate void GetAtomFeedCallback(Stream responseStream);

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait |
                SupportedPageOrientation.Landscape;

            string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
            string username = "username";
            string password = "password";
            string domain = "";

            GetAtomFeed(url, username, password, domain, (s) =>
            {
                XNamespace ns = "http://www.w3.org/2005/Atom";
                XDocument xdoc = XDocument.Load(s);

                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                    {
                        MessageBox.Show(item.Title);
                    }
                });
            });
        }

        private static void GetAtomFeed(string url, string username,
            string password, string domain, GetAtomFeedCallback cb)
        {
            HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
            webReq.Credentials = new NetworkCredential(username, password, domain);

            webReq.BeginGetResponse(
                (result) =>
                {
                    HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                    cb(resp.GetResponseStream());
                }, webReq);
        }

    Read the article

  • Extract data from a specific range of cells in multiple worksheets in multiple files

    - by Michele
    I need to extract data from a specific range of cells (always the same cells) in multiple worksheets in multiple files, with one file per day. I have 6 technicians each day of the week, Monday through Friday, so there are 5 files with 6 worksheets each. I have entered specific info in specific cells of every worksheet, and the range is constant (the same address in EVERY worksheet in every file). So, I need a formula to extract and calculate the data in the given range and dump it into another spreadsheet. I can forward an example file if it will help anyone answer my question, and more explanation is available upon request if necessary. JUST PLEASE SOMEBODY HELP ME!!!!! Thank you all in advance. Regards, Michele

    Read the article

  • How can I recover [data from] my failing USB key?

    - by moe37x3
    I have a Corsair Flash Voyager USB key, and it has almost completely failed. When I plug it into my [WinXP] computer, the OS mounts it and opens Explorer at the drive's root directory. However, if I try to copy any data off, I get an error message saying that the device is not there. If I leave it plugged in, the OS seems to oscillate between seeing it and not seeing it, since the "Safely Remove Hardware" tray icon appears and disappears every few seconds. The damage was probably caused by my abuse, either from plugging it in with my keys hanging off of it or from losing the cap and keeping it in my pocket uncapped. Is there anything I can do to save the data from it or even rehabilitate the drive?

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
    I recently configured a bridge br0 with members as eth0 (real if) and dummy0 (dummy.ko if). When I ping this machine, I receive duplicate replies as: # ping SERVERA PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data. 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!) 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms 64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!) Using tcpdump on SERVERA, I was able to see icmp echo replies being sent from eth0 and br0 itself as follows (oddly two echo request packets arrive "from" my Windows box myhost): 23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40 23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40 It's worth noting, testing reveals that hosts on the same physical switch do not see DUP icmp echo responses (a host on the same VLAN on another switch does see a dup icmp echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lay in the stack on linux, not the switch, but am opened to any suggestions. The system is running centos6/el6 kernel 2.6.32-71.29.1.el6.i686. How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces? Thanks, Matt [edit] Quick note: It was recommended in #linux to: [08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux [08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface? [08:59] <lkeijser> also set arp_announce to 2 for that interface [09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as: [09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter [09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications? [09:31] <whaffle> What would you make think so? [09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc [09:32] <mbrownnyc> but I do not know if PROMISC will do the same job [09:33] <mbrownnyc> I was referred here from #linux. 
any assistance is appreciated [09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function). [09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table. [09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but [09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you. [09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot. [09:36] <mbrownnyc> whaffle: I speak without the bridge [09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it. [09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge? [09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless. [09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC? [09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct? [09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC. [09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation [09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings [09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes [09:54] <mbrownnyc> thanks whaffle [edit2] After removing the bridge, and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate icmp echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again. [edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries. [edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So, the flow (must have) gone like this: ICMP echo request comes in through the mirrored uplink port. (the real) ICMP echo request is received by the NIC (the mirrored) ICMP echo request is received by the NIC ICMP echo reply is sent for both. I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.

    Read the article

  • Changing Word mail merge data source locations in bulk?

    - by Daft Viking
    I've just moved a number of Word mail merge files, and a number of Excel spreadsheets that are the data sources for the mail merges, from a Windows XP computer to a Windows 7 computer, and now all the paths for the merge sources are incorrect (used to be c:\documents and settings\user\my documents.... now c:\users\documents....). While I can correct the path of the data source in each file individually, I was hoping that there would be some way of updating the files in bulk, as there are a relatively large number of them. Word 2007 is what is being used, but the documents are all in the previous DOC format (not DOCX).
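    Not an answer from the original thread, but one way this is commonly scripted is with the Word object model, iterating over the documents and calling MailMerge.OpenDataSource with the new path. The rough C# sketch below is built on assumptions of my own (the folder and spreadsheet paths are hypothetical, and it presumes Word plus the .NET 4 interop assemblies are installed); the same idea works as a VBA macro:

        using System.IO;
        using Microsoft.Office.Interop.Word;

        class RepointMergeSources
        {
            static void Main()
            {
                // Hypothetical locations; adjust to the real folders and data source.
                const string mergeDocFolder = @"C:\Users\daft\Documents\Merges";
                const string newDataSource = @"C:\Users\daft\Documents\Merges\Customers.xlsx";

                var word = new Application();
                word.Visible = false;
                try
                {
                    foreach (string file in Directory.GetFiles(mergeDocFolder, "*.doc*"))
                    {
                        Document doc = word.Documents.Open(file);
                        try
                        {
                            // Only touch documents that are actually mail merge main documents.
                            if (doc.MailMerge.MainDocumentType != WdMailMergeMainDocType.wdNotAMergeDocument)
                            {
                                doc.MailMerge.OpenDataSource(Name: newDataSource);
                                doc.Save();
                            }
                        }
                        finally
                        {
                            doc.Close();
                        }
                    }
                }
                finally
                {
                    word.Quit();
                }
            }
        }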

    Read the article

  • Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder?

    - by Selin Peck
    Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder? I start my office work by removing Chrome's User Data folder, replacing it with my own User Data copied from my external drive, and saving the original User Data to another folder. Before leaving in the evening, I take back my own User Data and restore the original User Data to where it was originally saved. Is this process advisable? Would I be safe this way, or if not, where else does Chrome save data outside of the User Data folder in AppData? Also, how does this process work in Mozilla Firefox?

    Read the article

  • How do I populate multiple records of data into a PDF form like a mail-merge?

    - by user38801
    I have Acrobat Pro, and I have a PDF with a form on it. Assuming the fields in the form correspond to a data source (like rows in an RDBMS table or xml file), I want to then print multiple copies of the PDF file, with each copy having the values of a different row in the data source. It is preferable to directly interface with an actual database, rather than having to save an XML file every time I do this. If this involves programming that's cool too, I only posted here because the question didn't seem appropriate for StackOverflow. Thanks!

    Read the article

  • Ways to improve completeness of files for data recovery and scanning?

    - by SteveO
    I am using R-studio for data recovery on one of my ntfs partition. There is a pdf file about 16MB, but the software can only recover 15MB of it. So I am thinking about what ways can be used to improve the quality of scanning and recovery by the software? I am looking around its preferences. I am not quite sure whether there are some adjustable parameters for scanning and recovery which can be fine-tuned to improve the quality? R-studio has a free demo version, for which scanning is free,but recovery isn't. It is downloadable from http://www.data-recovery-software.net/Data_Recovery_Download.shtml Its manual is here http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have tried my best to search for answers in the manual, but failed to find one. Their technical support is not as good as their software, and helpless usually in my opinion. Thanks!

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of codes to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more! Read More >
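    The serialization described above is exposed through the Chart control's Serializer property. A bare-bones sketch of saving and restoring a snapshot as a byte array follows; the helper class, method names, and the idea of keeping the bytes in a database column are my own illustration, not the article's code:

        using System.IO;
        using System.Web.UI.DataVisualization.Charting;

        public static class ChartSnapshots
        {
            public static byte[] SaveSnapshot(Chart chart)
            {
                // Persist both the data points and the appearance settings.
                chart.Serializer.Content = SerializationContents.All;

                using (var stream = new MemoryStream())
                {
                    chart.Serializer.Save(stream);
                    return stream.ToArray();   // e.g. store this in a database column
                }
            }

            public static void LoadSnapshot(Chart chart, byte[] snapshot)
            {
                // Reconstitute a previously saved chart into an existing Chart control.
                using (var stream = new MemoryStream(snapshot))
                {
                    chart.Serializer.Load(stream);
                }
            }
        }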

    Read the article

  • Oracle Database Security Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision and IRM compliments nicely the overall Oracle security set of solutions. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database to documents on the desktop! With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how both Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following which highlights these database security options.Transparent Data Encryption protecting the communication from the Oracle IRM server to the Database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end. Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space. Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administrate the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

    Read the article

  • Join our webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate

    - by Irem Radzik
    The data integration team has organized a series of webcasts for this summer. We are kicking it off this Thursday, June 30th, at 10am PT with a product update webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate. In this webcast you will hear from product management about the new patch updates to both GoldenGate 11g R1 and ODI 11gR1. Jeff Pollock, Sr. Director of Product Management for ODI, will talk about the new features in Oracle Data Integrator 11.1.1.5, including the data lineage integration with OBI EE, enhanced web services to support flexible architectures, as well as capabilities for efficient object execution such as Load Plans. Jeff will discuss support for complex files and performance enhancements. Chris McAllister, Sr. Director of Product Management for Oracle GoldenGate, will cover the new features of Oracle GoldenGate 11.1.1.1, such as increased data security by supporting the Oracle Database Advanced Security option, deeper integration with Oracle Database, and the expanded list of heterogeneous databases GoldenGate supports. Chris will also talk about the new Oracle GoldenGate 11gR1 release for the HP NonStop platform and will provide information on our strategic direction for product development. Join us this Thursday at 10am PT/1pm ET to hear directly from Data Integration Product Management. You can register here for the June 30th webcast as well as for the upcoming ones in our summer webcast series.

    Read the article
