Search Results

Search found 5845 results on 234 pages for 'commit protocol'.


  • GConf Error: No D-BUS daemon running?! How to reinstall or fix?

    - by v2r
    After installing Konqueror and restarting my laptop, I got the following error while trying to open, edit or access files as root from within Terminal (which is essential for me). root@linuxBox:/home/v2r# gnome-open /home/ (gnome-open:2686): GConf-WARNING **: Client failed to connect to the D-BUS daemon: //bin/dbus-launch terminated abnormally with the following error: No protocol specified Autolaunch error: X11 initialization failed. GConf Error: No D-BUS daemon running root@linuxBox:/home/v2r# No protocol specified Could not parse arguments: Cannot open display: It also seems that dbus is no longer installed properly in /bin/ and /usr/bin/. See screenshot. How would I go about fixing this problem? Thank you in advance! Thank you for your answer, SirCharlo! It does not resolve the problem at all. Please note that it only happens while being root! root@linuxBox:/home/v2r# gnome-open /home/ (gnome-open:5170): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Failed to connect to socket /tmp/dbus-2RdCUjrZ9k: Connection refused GConf Error: No D-BUS daemon running root@linuxBox:/home/v2r# No protocol specified Could not parse arguments: Cannot open display:


  • Switch from encrypted partition to unencrypted (Error: cryptsetup: evms_activate is not available)

    - by Chris Lercher
    I initially installed Ubuntu 11.04 with an encrypted file system (from the alternate install CD: Guided Partitioning, LVM encrypted). Now I wanted to change this setup to have my root file system on an unencrypted partition. I had the following setup before: /dev/mapper/my-root on / type ext4 (rw,noatime,errors=remount-ro,commit=0,commit=0) /dev/sda1 on /boot type ext2 (rw,noatime) I backed up /, reformatted /dev/sda5 (which had contained the encrypted LVM device) to an ext3 partition, and restored / to that partition. I edited /etc/fstab, removed the line /dev/mapper/my-root / ..., and added the line: /dev/sda5 / ext3 noatime,rw,errors=remount-ro,commit=0 0 1 I edited /etc/crypttab, and commented out the single entry. On reboot, I get the grub screen as usual, but then I get the message cryptsetup:evms_activate is not available, waiting for encrypted source device. I tried reinstalling Grub2 using a LiveCD with the ChRoot method, but that didn't make any difference. Why is Ubuntu still searching for an encrypted device?


  • Is committing/checking code everyday a good practice?

    - by ArtB
    I've been reading Martin Fowler's note on Continuous Integration, and he lists as a must: "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, so in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? ^ The order is somewhat arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.


  • Changing Filename Case with TortoiseSVN on Windows [migrated]

    - by Brad Turner
    I've been working on a development project using a Windows machine as a test server. Eventually, I'd like the "live" version to end up on a Linux machine. While trying to test on the Linux machine, it became apparent that I needed to change the case of several file names, as Windows is case-insensitive but Linux isn't. When I changed the file name case in Windows, TortoiseSVN recognized that the file had changed and marked my folders appropriately. However, when I tried to commit my changes, not only did TortoiseSVN tell me that no changes had been made, but it had actually reverted all of the file name changes I had made back to their original case. My question is: is there a simple way to alter the file name case from a Windows PC and have the changes appear in my repository? I'd like to avoid any kind of delete, commit, replace, commit scenario to keep my commits tidy if possible. Thanks!


  • In centralized version control, is it always good to update often?

    - by janos
    Assuming that: You are in a team developing some software. Your team is using centralized version control in the development process. You are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build. Your team members commit something every day that affects some of the files you're working with for your fancy new feature. Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Can we say that it is always a good idea to update often?


  • Nullmailer in /var/log/syslog

    - by Fluffy
    I'm getting a lot of messages like these: me@home:/etc/snmp$ tail /var/log/syslog Jun 12 17:52:15 home nullmailer[1238]: Starting delivery: protocol: smtp host: mail. file: 1339502401.24665 Jun 12 17:52:15 home nullmailer[7086]: smtp: Failed: Connect failed Jun 12 17:52:15 home nullmailer[1238]: Sending failed: Host not found Jun 12 17:52:15 home nullmailer[1238]: Starting delivery: protocol: smtp host: mail. file: 1339174804.27614 Jun 12 17:52:15 home nullmailer[7087]: smtp: Failed: Connect failed Jun 12 17:52:15 home nullmailer[1238]: Sending failed: Host not found Jun 12 17:52:15 home nullmailer[1238]: Starting delivery: protocol: smtp host: mail. file: 1339324201.21737 Jun 12 17:52:15 home nullmailer[7088]: smtp: Failed: Connect failed Jun 12 17:52:15 home nullmailer[1238]: Sending failed: Host not found Jun 12 17:52:15 home nullmailer[1238]: Delivery complete, 331 message(s) remain. The problem is, I don't recall sending anything. How do I find out which software is sending these messages? How do I read them?


  • Did a bunch of wrong work, should I keep it?

    - by Droogans
    I have forked a repo and branched that clone to code a story, and because I didn't understand the problem, wrote code that isn't solving my task at hand, but may prove useful later. Should I:
    1. Delete it, and don't worry about it. Then commit without the extra code.
    2. Make yet another branch for just that work, commit it, but don't post a pull request on it.
    3. Just commit it with the existing code, and worry about the extra "fluff" later.
    I was thinking #2. If I understand correctly, I could just keep the extra code on a branch I never use on my clone, and dig it up later if something comes up that may benefit from using it.


  • ASP.NET MVC 3 Hosting :: How to Deploy Web Apps Using ASP.NET MVC 3, Razor and EF Code First - Part I

    - by mbridge
    First, you can download the source code from http://efmvc.codeplex.com. The following frameworks will be used for this step-by-step tutorial.
    Define Domain Model
    Let's create the domain model for our simple web application.
    Category Class
    public class Category { public int CategoryId { get; set; } [Required(ErrorMessage = "Name Required")] [StringLength(25, ErrorMessage = "Must be less than 25 characters")] public string Name { get; set; } public string Description { get; set; } public virtual ICollection<Expense> Expenses { get; set; } }
    Expense Class
    public class Expense { public int ExpenseId { get; set; } public string Transaction { get; set; } public DateTime Date { get; set; } public double Amount { get; set; } public int CategoryId { get; set; } public virtual Category Category { get; set; } }
    We have two domain entities - Category and Expense. A single category contains a list of expense transactions, and every expense transaction should have a Category. In this post we will be focusing on CRUD operations for the entity Category, and we will work on the Expense entity with a View Model object in a later post. The source code for this application will be refactored over time. The above entities are very simple POCO (Plain Old CLR Object) classes, and the entity Category is decorated with validation attributes from the System.ComponentModel.DataAnnotations namespace.
    Now we want to use these entities to define model objects for Entity Framework 4. Using the Code First approach of Entity Framework, we can first define the entities by simply writing POCO classes without any coupling to any API or database library. This approach lets you focus on the domain model, which enables Domain-Driven Development for applications. EF Code First support is currently enabled with a separate API that runs on top of Entity Framework 4. EF Code First has reached CTP 5 as I am writing this article.
    Creating a Context Class for Entity Framework
    We have created our domain model; now let's create a class for working with Entity Framework Code First. For this, you have to download EF Code First CTP 5 and add a reference to the assembly EntityFramework.dll. You can also use NuGet to download and add a reference to EF Code First.
    public class MyFinanceContext : DbContext { public MyFinanceContext() : base("MyFinance") { } public DbSet<Category> Categories { get; set; } public DbSet<Expense> Expenses { get; set; } }
    The above class MyFinanceContext is derived from DbContext, which can connect your model classes to a database. The MyFinanceContext class maps our Category and Expense classes to the database tables Categories and Expenses using DbSet<TEntity>, where TEntity is any POCO class. When we run the application for the first time, it will automatically create the database. EF Code First looks for a connection string in web.config or app.config that has the same name as the dbcontext class. If it does not find any connection string matching that convention, it will automatically create a database in the local SQL Express instance by default, and the name of the database will be the same as the dbcontext class. You can also define the name of the database in the constructor of the dbcontext class. Unlike NHibernate, we don't have to use any XML-based mapping files or a Fluent interface for mapping between our model and the database.
    The model classes in Code First work on the basis of conventions, and we can also use a fluent API to refine our model. The convention for the primary key is 'Id' or '<class name>Id'. If primary key properties are detected with type 'int', 'long' or 'short', they will automatically be registered as identity columns in the database by default. Primary key detection is not case sensitive. We can define our model classes with validation attributes from the System.ComponentModel.DataAnnotations namespace, and validation rules are automatically enforced when a model object is updated or saved.
    Generic Repository for EF Code First
    We have created the model classes and the dbcontext class. Now we have to create a generic repository pattern for data persistence with EF Code First. If you don't know about the repository pattern, check out Martin Fowler's article on Repository. Let's create a generic repository to work with DbContext and the DbSet generics.
    public interface IRepository<T> where T : class { void Add(T entity); void Delete(T entity); T GetById(long Id); IEnumerable<T> All(); }
    RepositoryBase – Generic Repository class
    protected MyFinanceContext Database { get { return database ?? (database = DatabaseFactory.Get()); } } public virtual void Add(T entity) { dbset.Add(entity); } public virtual void Delete(T entity) { dbset.Remove(entity); } public virtual T GetById(long id) { return dbset.Find(id); } public virtual IEnumerable<T> All() { return dbset.ToList(); } }
    DatabaseFactory class
    public class DatabaseFactory : Disposable, IDatabaseFactory { private MyFinanceContext database; public MyFinanceContext Get() { return database ?? (database = new MyFinanceContext()); } protected override void DisposeCore() { if (database != null) database.Dispose(); } }
    Unit of Work
    If you are new to the Unit of Work pattern, check out Fowler's article on Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." Let's create a class for handling the Unit of Work.
    public interface IUnitOfWork { void Commit(); }
    UnitOfWork class
    public class UnitOfWork : IUnitOfWork { private readonly IDatabaseFactory databaseFactory; private MyFinanceContext dataContext; public UnitOfWork(IDatabaseFactory databaseFactory) { this.databaseFactory = databaseFactory; } protected MyFinanceContext DataContext { get { return dataContext ?? (dataContext = databaseFactory.Get()); } } public void Commit() { DataContext.Commit(); } }
    The Commit method of the UnitOfWork calls the Commit method of the MyFinanceContext class, which in turn executes the SaveChanges method of the DbContext class.
    Repository class for Category
    In this post we will be focusing on persistence for the Category entity and will work on the other entities in a later post. Let's create a repository for handling CRUD operations for Category by deriving from the generic repository RepositoryBase<T>.
    public class CategoryRepository : RepositoryBase<Category>, ICategoryRepository { public CategoryRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) { } } public interface ICategoryRepository : IRepository<Category> { }
    If we need additional methods beyond the generic repository for the Category, we can define them in the CategoryRepository.
    Dependency Injection using Unity 2.0
    If you are new to Inversion of Control / Dependency Injection or Unity, please have a look at my articles at http://weblogs.asp.net/shijuvarghese/archive/tags/IoC/default.aspx. I want to create a custom lifetime manager for Unity to store objects in the current HttpContext.
    public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable { public override object GetValue() { return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]; } public override void RemoveValue() { HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName); } public override void SetValue(object newValue) { HttpContext.Current.Items[typeof(T).AssemblyQualifiedName] = newValue; } public void Dispose() { RemoveValue(); } }
    Let's create a controller factory for Unity in the ASP.NET MVC 3 application.
    404, String.Format( "The controller for path '{0}' could not be found" + "or it does not implement IController.", reqContext.HttpContext.Request.Path)); if (!typeof(IController).IsAssignableFrom(controllerType)) throw new ArgumentException( string.Format( "Type requested is not a controller: {0}", controllerType.Name), "controllerType"); try { controller = container.Resolve(controllerType) as IController; } catch (Exception ex) { throw new InvalidOperationException(String.Format( "Error resolving controller {0}", controllerType.Name), ex); } return controller; } }
    Configure contract and concrete types in Unity
    Let's configure our contract and concrete types in Unity for resolving our dependencies.
    private void ConfigureUnity() { //Create UnityContainer IUnityContainer container = new UnityContainer() .RegisterType<IDatabaseFactory, DatabaseFactory>(new HttpContextLifetimeManager<IDatabaseFactory>()) .RegisterType<IUnitOfWork, UnitOfWork>(new HttpContextLifetimeManager<IUnitOfWork>()) .RegisterType<ICategoryRepository, CategoryRepository>(new HttpContextLifetimeManager<ICategoryRepository>()); //Set container for Controller Factory ControllerBuilder.Current.SetControllerFactory( new UnityControllerFactory(container)); }
    In the above ConfigureUnity method, we register our types with the Unity container using the custom lifetime manager HttpContextLifetimeManager. Let's call the ConfigureUnity method in Global.asax.cs to set the controller factory for Unity and configure the types with Unity.
    protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterGlobalFilters(GlobalFilters.Filters); RegisterRoutes(RouteTable.Routes); ConfigureUnity(); }
    Developing the web application using ASP.NET MVC 3
    We have created the domain model for our web application, and we have also created repositories and configured dependencies with the Unity container.
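    Note: the controller factory listing above begins mid-method in this excerpt. Below is a hedged sketch of how such a Unity controller factory is commonly written; only the fragments that do appear above (the 404 message, the IsAssignableFrom check and container.Resolve) come from the article, while the base class, field names and method signature are assumptions.
```csharp
// Hedged reconstruction of the truncated UnityControllerFactory (assumptions noted above).
using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;
using Microsoft.Practices.Unity;

public class UnityControllerFactory : DefaultControllerFactory
{
    private readonly IUnityContainer container;

    public UnityControllerFactory(IUnityContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)
    {
        if (controllerType == null)
            throw new HttpException(
                404, String.Format(
                    "The controller for path '{0}' could not be found " +
                    "or it does not implement IController.",
                    reqContext.HttpContext.Request.Path));

        if (!typeof(IController).IsAssignableFrom(controllerType))
            throw new ArgumentException(
                string.Format("Type requested is not a controller: {0}", controllerType.Name),
                "controllerType");

        IController controller;
        try
        {
            // Resolve the controller (and its constructor dependencies) from the Unity container.
            controller = container.Resolve(controllerType) as IController;
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException(
                String.Format("Error resolving controller {0}", controllerType.Name), ex);
        }
        return controller;
    }
}
```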
    Now we have to create controller classes and views for doing CRUD operations against the Category entity. Let's create the controller class for Category.
    Category Controller
    public class CategoryController : Controller { private readonly ICategoryRepository categoryRepository; private readonly IUnitOfWork unitOfWork; public CategoryController(ICategoryRepository categoryRepository, IUnitOfWork unitOfWork) { this.categoryRepository = categoryRepository; this.unitOfWork = unitOfWork; } public ActionResult Index() { var categories = categoryRepository.All(); return View(categories); } [HttpGet] public ActionResult Edit(int id) { var category = categoryRepository.GetById(id); return View(category); } [HttpPost] public ActionResult Edit(int id, FormCollection collection) { var category = categoryRepository.GetById(id); if (TryUpdateModel(category)) { unitOfWork.Commit(); return RedirectToAction("Index"); } else return View(category); } [HttpGet] public ActionResult Create() { var category = new Category(); return View(category); } [HttpPost] public ActionResult Create(Category category) { if (!ModelState.IsValid) { return View("Create", category); } categoryRepository.Add(category); unitOfWork.Commit(); return RedirectToAction("Index"); } [HttpPost] public ActionResult Delete(int id) { var category = categoryRepository.GetById(id); categoryRepository.Delete(category); unitOfWork.Commit(); var categories = categoryRepository.All(); return PartialView("CategoryList", categories); } }
    Creating Views in Razor
    Now we are going to create views in Razor for our ASP.NET MVC 3 application. Let's create a partial view CategoryList.cshtml for listing category information and providing links for the Edit and Delete operations.
    CategoryList.cshtml
    @using MyFinance.Helpers; @using MyFinance.Domain; @model IEnumerable<Category> <table> <tr> <th>Actions</th> <th>Name</th> <th>Description</th> </tr> @foreach (var item in Model) { <tr> <td> @Html.ActionLink("Edit", "Edit", new { id = item.CategoryId }) @Ajax.ActionLink("Delete", "Delete", new { id = item.CategoryId }, new AjaxOptions { Confirm = "Delete Expense?", HttpMethod = "Post", UpdateTargetId = "divCategoryList" }) </td> <td> @item.Name </td> <td> @item.Description </td> </tr> } </table> <p> @Html.ActionLink("Create New", "Create") </p>
    The Delete link provides Ajax functionality using Ajax.ActionLink. It issues an Ajax request to the Delete action method in the CategoryController class. After deleting the record, the Delete action method returns the partial view CategoryList. We use the CategoryList view both for the Ajax functionality and in the Index view for displaying the list of category information.
Let’s create Index view using partial view CategoryList  Index.chtml @model IEnumerable<MyFinance.Domain.Category> @{     ViewBag.Title = "Index"; }    <h2>Category List</h2>    <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>    <div id="divCategoryList">               @Html.Partial("CategoryList", Model) </div> We can call the partial views using Html.Partial helper method. Now we are going to create View pages for insert and update functionality for the Category. Both view pages are sharing common user interface for entering the category information. So I want to create an EditorTemplate for the Category information. We have to create the EditorTemplate with the same name of entity object so that we can refer it on view pages using @Html.EditorFor(model => model) . So let’s create template with name Category. Category.cshtml @model MyFinance.Domain.Category <div class="editor-label"> @Html.LabelFor(model => model.Name) </div> <div class="editor-field"> @Html.EditorFor(model => model.Name) @Html.ValidationMessageFor(model => model.Name) </div> <div class="editor-label"> @Html.LabelFor(model => model.Description) </div> <div class="editor-field"> @Html.EditorFor(model => model.Description) @Html.ValidationMessageFor(model => model.Description) </div> Let’s create view page for insert Category information @model MyFinance.Domain.Category   @{     ViewBag.Title = "Save"; }   <h2>Create</h2>   <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>   @using (Html.BeginForm()) {     @Html.ValidationSummary(true)     <fieldset>         <legend>Category</legend>                @Html.EditorFor(model => model)               <p>             <input type="submit" value="Create" />         </p>     </fieldset> }   <div>     @Html.ActionLink("Back to List", "Index") </div> ViewStart file In Razor views, we can add a file named _viewstart.cshtml in the views directory  and this will be shared among the all views with in the Views directory. The below code in the _viewstart.cshtml, sets the Layout page for every Views in the Views folder.     @{     Layout = "~/Views/Shared/_Layout.cshtml"; } Tomorrow, we will cotinue the second part of this article. :)


  • X server with nvidia driver crashing: 12.04

    - by Raster
    My X server consistently crashes. It seems like this is happening when the X server is idle. This behaviour is new with 12.04. This is only happening on the second display of a multiseat system. Is there a configuration change I can make to stop this? X.Org X Server 1.11.3 Release Date: 2011-12-16 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.42-26-generic x86_64 Ubuntu Current Operating System: Linux Desktop 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-29-generic root=/dev/mapper/Group1-Root ro ramdisk_size=512000 quiet splash vt.handoff=7 Build Date: 04 August 2012 01:51:23AM xorg-server 2:1.11.4-0ubuntu10.7 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.24.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.1.log", Time: Sun Sep 2 22:37:06 2012 (==) Using config file: "/etc/X11/xorg.conf" (==) Using system config directory "/usr/share/X11/xorg.conf.d" Backtrace: 0: /usr/bin/X (xorg_backtrace+0x26) [0x7fcee86be846] 1: /usr/bin/X (0x7fcee8536000+0x18c6ea) [0x7fcee86c26ea] 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fcee785c000+0xfcb0) [0x7fcee786bcb0] 3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x902a9) [0x7fcee154a2a9] 4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0xfd5e7) [0x7fcee15b75e7] 5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d6b92) [0x7fcee1990b92] 6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d74d5) [0x7fcee19914d5] 7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d767d) [0x7fcee199167d] 8: /usr/bin/X (0x7fcee8536000+0x1196ec) [0x7fcee864f6ec] 9: /usr/bin/X (0x7fcee8536000+0xe8ad5) [0x7fcee861ead5] 10: /usr/bin/X (0x7fcee8536000+0xe9d45) [0x7fcee861fd45] /etc/X11/xorg.conf Section "ServerLayout" Identifier "Desktop" Screen 0 "DesktopScreen" 0 0 InputDevice "DesktopMouse" "CorePointer" InputDevice "DesktopKeyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection Section "ServerLayout" Identifier "Desktop2" Screen 1 "Desktop2Screen" 0 0 InputDevice "Desktop2Mouse" "CorePointer" InputDevice "Desktop2Keyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection Section "Files" EndSection Section "ServerFlags" Option "AutoAddDevices" "false" Option "AutoEnableDevices" "false" Option "AllowMouseOpenFail" "on" Option "AllowEmptyInput" "on" Option "ZapWarning" "on" Option "HandleSepcialKeys" "off" # Zapping on Option "DRI2" "on" Option "Xinerama" "0" EndSection # Desktop Mouse Section "InputDevice" Identifier "DesktopMouse" Driver "evdev" Option "Device" "/dev/input/event3" Option "Protocol" "auto" Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection # Desktop2 Mouse Section "InputDevice" Identifier "Desktop2Mouse" Driver "evdev" Option "Device" "/dev/input/event5" Option "Protocol" "auto" 
Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection Section "InputDevice" Identifier "DesktopKeyboard" Driver "evdev" Option "Device" "/dev/input/event4" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection Section "InputDevice" Identifier "Desktop2Keyboard" Driver "evdev" Option "Device" "/dev/input/event6" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection Section "Monitor" Identifier "Desktop2Monitor" VendorName "Acer" ModelName "Acer G235H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Monitor" Identifier "DesktopMonitor" VendorName "Acer" ModelName "Acer H213H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection Section "Device" Identifier "EVGACard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 560 Ti" Option "Coolbits" "1" BusID "PCI:2:0:0" Screen 0 EndSection Section "Device" Identifier "XFXCard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 9800" Option "Coolbits" "1" BusID "PCI:5:0:0" Screen 0 EndSection Section "Screen" Identifier "DesktopScreen" Device "EVGACard" Monitor "DesktopMonitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Desktop2Screen" Device "XFXCard" Monitor "Desktop2Monitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection


  • E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

    - by Giri Mandalika
    Different batch processes in Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work in hand. The number of child processes to fork is controlled by the THREADS parameter in APPS.PAY_ACTION_PARAMETERS view. THREADS parameter The default value for THREADS parameter is 1, which is fine for a single-processor system but not optimal for the modern multi-core multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. However on the down side, since multiple child processes operate against the same set of payroll tables in HR schema, database may experience undesired consequences such as buffer busy waits and index contention, which results in giving up some of the gains achieved by using multiple child processes/threads to process the work. Couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention. eg., Set a value for THREADS parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'THREADS'; COMMIT; (I am not aware of any maximum value for THREADS parameter) CHUNK_SIZE parameter The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time. When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted. It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by +/-10 for CHUNK_SIZE parameter and observe the performance impact. A larger value may make sense during the main processing phase. Again experimentation is the key in finding the suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found. eg., Set a value for CHUNK_SIZE parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'CHUNK_SIZE'; COMMIT; CHUNK_SIZE action parameter accepts a value that is as low as 1 or as high as 16000. CHUNK SHUFFLE parameter By default, chunks of assignment actions are processed sequentially by all threads - which may not be a good thing especially given that all child processes/threads performing similar actions against the same set of tables almost at the same time. By saying not a good thing, I mean to say that the default behavior leads to contention in the database (in data blocks, for example). 
It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured. eg., Set chunk shuffling as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = 'Y' WHERE PARAMETER_NAME = 'CHUNK SHUFFLE'; COMMIT; Finally I recommend checking the following document out for additional details and additional pay action tunable parameters that may speed up the processing of Oracle Payroll.     My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks Also experiment with different combinations of parameters and values until the right set of action parameters and values are found for your deployment.


  • HP to Cisco spanning tree root flapping

    - by Tim Brigham
    Per a recent question I recently configured both my HP (2x 2900) and Cisco (1x 3750) hardware to use MSTP for interoperability. I thought this was functional until I applied the change to the third device (HP switch 1 below) at which time the spanning tree root started flapping causing performance issues (5% packet loss) between my two HP switches. I'm not sure why. HP Switch 1 A4 connected to Cisco 1/0/1. HP Switch 2 B2 connected to Cisco 2/0/1. HP Switch 1 A2 connected to HP Switch 2 A1. I'd prefer the Cisco stack to act as the root. EDIT: There is one specific line - 'spanning-tree 1 path-cost 500000' in the HP switch 2 that I didn't add and was preexisting. I'm not sure if it could have the kind of impact that I'm describing. I'm more a security and monitoring guy then networking. EDIT 2: I'm starting to believe the problem lies in the fact that the value for my MST 0 instance on the Cisco is still at the default 32768. I worked up a diagram: This is based on every show command I could find for STP. I'll make this change after hours and see if it helps. Cisco 3750 Config: version 12.2 spanning-tree mode mst spanning-tree extend system-id spanning-tree mst configuration name mstp revision 1 instance 1 vlan 1, 40, 70, 100, 250 spanning-tree mst 1 priority 0 vlan internal allocation policy ascending interface TenGigabitEthernet1/1/1 switchport trunk encapsulation dot1q switchport mode trunk ! interface TenGigabitEthernet2/1/1 switchport trunk encapsulation dot1q switchport mode trunk ! interface Vlan1 no ip address ! interface Vlan100 ip address 192.168.100.253 255.255.255.0 ! Cisco 3750 show spanning tree: show spanning-tree MST0 Spanning tree enabled protocol mstp Root ID Priority 32768 Address 0004.ea84.5f80 Cost 200000 Port 53 (TenGigabitEthernet1/1/1) Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 32768 (priority 32768 sys-id-ext 0) Address a44c.11a6.7c80 Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Te1/1/1 Root FWD 2000 128.53 P2p MST1 Spanning tree enabled protocol mstp Root ID Priority 1 Address a44c.11a6.7c80 This bridge is the root Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Bridge ID Priority 1 (priority 0 sys-id-ext 1) Address a44c.11a6.7c80 Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec Interface Role Sts Cost Prio.Nbr Type ------------------- ---- --- --------- -------- -------------------------------- Te1/1/1 Desg FWD 2000 128.53 P2p Cisco 3750 show logging: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100, changed state to down %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100, changed state to up %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up HP Switch 1: ; J9049A Configuration Editor; Created on release #T.13.71 vlan 1 name "DEFAULT_VLAN" untagged 1-8,10,13-16,18-23,A1-A4 ip address 100.100.100.17 255.255.255.0 no untagged 9,11-12,17,24 exit vlan 100 name "192.168.100" untagged 9,11-12,17,24 tagged 1-8,10,13-16,18-23,A1-A4 no ip address exit vlan 21 name "Users_2" tagged 1,A1-A4 no ip address exit vlan 40 name "Cafe" tagged 1,4,7,A1-A4 no ip address exit vlan 250 name "Firewall" tagged 1,4,7,A1-A4 no ip address exit vlan 70 name "DMZ" 
tagged 1,4,7-8,13,A1-A4 no ip address exit spanning-tree spanning-tree config-name "mstp" spanning-tree config-revision 1 spanning-tree instance 1 vlan 1 40 70 100 250 password manager password operator HP Switch 1 show spanning tree: show spanning-tree Multiple Spanning Tree (MST) Information STP Enabled : Yes Force Version : MSTP-operation IST Mapped VLANs : 2-39,41-69,71-99,101-249,251-4094 Switch MAC Address : 0021f7-126580 Switch Priority : 32768 Max Age : 20 Max Hops : 20 Forward Delay : 15 Topology Change Count : 363,490 Time Since Last Change : 14 hours CST Root MAC Address : 0004ea-845f80 CST Root Priority : 32768 CST Root Path Cost : 200000 CST Root Port : 1 IST Regional Root MAC Address : 0021f7-126580 IST Regional Root Priority : 32768 IST Regional Root Path Cost : 0 IST Remaining Hops : 20 Root Guard Ports : TCN Guard Ports : BPDU Protected Ports : BPDU Filtered Ports : PVST Protected Ports : PVST Filtered Ports : | Prio | Designated Hello Port Type | Cost rity State | Bridge Time PtP Edge ----- --------- + --------- ---- ---------- + ------------- ---- --- ---- A1 | Auto 128 Disabled | A2 10GbE-CX4 | 2000 128 Forwarding | 0021f7-126580 2 Yes No A3 10GbE-CX4 | Auto 128 Disabled | A4 10GbE-SR | Auto 128 Disabled | HP Switch 1 Logging: I removed the date / time fields since they are inaccurate (no NTP configured on these switches) 00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to 32768:0021f7-126580 00839 stp: MSTI 1 Root changed from 32768:0021f7-126580 to 0:a44c11-a67c80 00842 stp: MSTI 1 starved for an MSTI Msg Rx on port A4 from 0:a44c11-a67c80 00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to 32768:0021f7-126580 00839 stp: MSTI 1 Root changed from 32768:0021f7-126580 to 0:a44c11-a67c80 00839 stp: MSTI 1 Root changed from 0:a44c11-a67c80 to ... 
HP Switch 2 Configuration: ; J9146A Configuration Editor; Created on release #W.14.49 vlan 1 name "DEFAULT_VLAN" untagged 1,3-17,21-24,A1-A2,B2 ip address 100.100.100.36 255.255.255.0 no untagged 2,18-20,B1 exit vlan 100 name "192.168.100" untagged 2,18-20 tagged 1,3-17,21-24,A1-A2,B1-B2 no ip address exit vlan 21 name "Users_2" tagged 1,A1-A2,B2 no ip address exit vlan 40 name "Cafe" tagged 1,13-14,16,A1-A2,B2 no ip address exit vlan 250 name "Firewall" tagged 1,13-14,16,A1-A2,B2 no ip address exit vlan 70 name "DMZ" tagged 1,13-14,16,A1-A2,B2 no ip address exit logging 192.168.100.18 spanning-tree spanning-tree 1 path-cost 500000 spanning-tree config-name "mstp" spanning-tree config-revision 1 spanning-tree instance 1 vlan 1 40 70 100 250 HP Switch 2 Spanning Tree: show spanning-tree Multiple Spanning Tree (MST) Information STP Enabled : Yes Force Version : MSTP-operation IST Mapped VLANs : 2-39,41-69,71-99,101-249,251-4094 Switch MAC Address : 0024a8-cd6000 Switch Priority : 32768 Max Age : 20 Max Hops : 20 Forward Delay : 15 Topology Change Count : 21,793 Time Since Last Change : 14 hours CST Root MAC Address : 0004ea-845f80 CST Root Priority : 32768 CST Root Path Cost : 200000 CST Root Port : A1 IST Regional Root MAC Address : 0021f7-126580 IST Regional Root Priority : 32768 IST Regional Root Path Cost : 2000 IST Remaining Hops : 19 Root Guard Ports : TCN Guard Ports : BPDU Protected Ports : BPDU Filtered Ports : PVST Protected Ports : PVST Filtered Ports : | Prio | Designated Hello Port Type | Cost rity State | Bridge Time PtP Edge ----- --------- + --------- ---- ---------- + ------------- ---- --- ---- A1 10GbE-CX4 | 2000 128 Forwarding | 0021f7-126580 2 Yes No A2 10GbE-CX4 | Auto 128 Disabled | B1 SFP+SR | 2000 128 Forwarding | 0024a8-cd6000 2 Yes No B2 | Auto 128 Disabled | HP Switch 2 Logging: I removed the date / time fields since they are inaccurate (no NTP configured on these switches) 00839 stp: CST Root changed from 32768:0021f7-126580 to 32768:0004ea-845f80 00839 stp: IST Root changed from 32768:0021f7-126580 to 32768:0024a8-cd6000 00839 stp: CST Root changed from 32768:0004ea-845f80 to 32768:0024a8-cd6000 00839 stp: CST Root changed from 32768:0024a8-cd6000 to 32768:0004ea-845f80 00839 stp: CST Root changed from 32768:0004ea-845f80 to 32768:0024a8-cd6000 00435 ports: port B1 is Blocked by STP 00839 stp: CST Root changed from 32768:0024a8-cd6000 to 32768:0021f7-126580 00839 stp: IST Root changed from 32768:0024a8-cd6000 to 32768:0021f7-126580 00839 stp: CST Root changed from 32768:0021f7-126580 to 32768:0004ea-845f80


  • How to use sudo with WinSCP and ProFTPd?

    - by Gaia
    I need to run the SFTP fileserver binary as root, but direct root login is not allowed. In WinSCP, if I use "default" on SFTP server protocol option everything works as expected. Following the instructions to sudo in WinSCP, I tried using "sudo /usr/sbin/proftpd" (works on the command line without any prompts) but it brings up "Cannot initialize SFTP protocol. Is the host running a SFTP server?" How to use sudo with WinSCP and ProFTPd? WinSCP 4.3.7 GUI Protocol: SFTP-3 CentOS 6.2 Webmin/Virtualmin (Current Version) PS: only cert based login is allowed . 2012-06-17 11:05:56.998 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.998 WinSCP Version 4.3.7 (Build 1679) (OS 6.1.7601 Service Pack 1) . 2012-06-17 11:05:56.998 Configuration: HKEY_CURRENT_USER\Software\Martin Prikryl\WinSCP 2\ . 2012-06-17 11:05:56.999 Login time: Sunday, June 17, 2012 11:05:56 AM . 2012-06-17 11:05:56.999 -------------------------------------------------------------------------- . 2012-06-17 11:05:56.999 Session name: KVM1 (Modified stored session) . 2012-06-17 11:05:57.047 Host name: mykvm.com (Port: 22) . 2012-06-17 11:05:57.048 User name: adminuser (Password: No, Key file: Yes) . 2012-06-17 11:05:57.048 Tunnel: No . 2012-06-17 11:05:57.048 Transfer Protocol: SFTP (SCP) . 2012-06-17 11:05:57.048 Ping type: -, Ping interval: 30 sec; Timeout: 15 sec . 2012-06-17 11:05:57.048 Proxy: none . 2012-06-17 11:05:57.048 SSH protocol version: 2; Compression: Yes . 2012-06-17 11:05:57.048 Bypass authentication: No . 2012-06-17 11:05:57.048 Try agent: Yes; Agent forwarding: No; TIS/CryptoCard: No; KI: Yes; GSSAPI: No . 2012-06-17 11:05:57.048 Ciphers: aes,blowfish,3des,WARN,arcfour,des; Ssh2DES: No . 2012-06-17 11:05:57.048 SSH Bugs: -,-,-,-,-,-,-,-,- . 2012-06-17 11:05:57.048 SFTP Bugs: -,- . 2012-06-17 11:05:57.048 Return code variable: Autodetect; Lookup user groups: Yes . 2012-06-17 11:05:57.048 Shell: default . 2012-06-17 11:05:57.048 EOL: 0, UTF: 2 . 2012-06-17 11:05:57.048 Clear aliases: Yes, Unset nat.vars: Yes, Resolve symlinks: Yes . 2012-06-17 11:05:57.048 LS: ls -la, Ign LS warn: Yes, Scp1 Comp: No . 2012-06-17 11:05:57.048 Local directory: default, Remote directory: home, Update: No, Cache: Yes . 2012-06-17 11:05:57.048 Cache directory changes: Yes, Permanent: Yes . 2012-06-17 11:05:57.048 DST mode: 1 . 2012-06-17 11:05:57.048 -------------------------------------------------------------------------- . 2012-06-17 11:05:57.113 Looking up host "mykvm.com" . 2012-06-17 11:05:57.132 Connecting to xxx.xxx.128.59 port 22 . 2012-06-17 11:05:57.499 Server version: SSH-2.0-OpenSSH_5.3 . 2012-06-17 11:05:57.499 Using SSH protocol version 2 . 2012-06-17 11:05:57.499 We claim version: SSH-2.0-WinSCP_release_4.3.7 . 2012-06-17 11:05:57.679 Server supports delayed compression; will try this later . 2012-06-17 11:05:57.679 Doing Diffie-Hellman group exchange . 2012-06-17 11:05:58.077 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:05:58.498 Host key fingerprint is: . 2012-06-17 11:05:58.498 ssh-rsa 2048 bd:e4:34:b1:d4:69:d6:4e:e4:26:04:8b:b7:b3:de:c3 . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:05:58.498 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:05:58.498 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:05:58.922 Reading private key file "D:\id_rsa.ppk" ! 
2012-06-17 11:05:58.924 Using username "adminuser". . 2012-06-17 11:05:59.550 Offered public key . 2012-06-17 11:05:59.743 Offer of public key accepted ! 2012-06-17 11:05:59.743 Authenticating with public key "masterkey for admin" . 2012-06-17 11:05:59.764 Prompt (3, SSH key passphrase, , Passphrase for key "masterkey for admin": ) . 2012-06-17 11:06:02.938 Sent public key signature . 2012-06-17 11:06:03.352 Access granted . 2012-06-17 11:06:03.352 Initiating key re-exchange (enabling delayed compression) . 2012-06-17 11:06:03.765 Doing Diffie-Hellman group exchange . 2012-06-17 11:06:03.955 Doing Diffie-Hellman key exchange with hash SHA-1 . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR client->server encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 client->server MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) compression . 2012-06-17 11:06:04.410 Initialised AES-256 SDCTR server->client encryption . 2012-06-17 11:06:04.410 Initialised HMAC-SHA1 server->client MAC algorithm . 2012-06-17 11:06:04.410 Initialised zlib (RFC1950) decompression . 2012-06-17 11:06:04.839 Opened channel for session . 2012-06-17 11:06:05.247 Started a shell/command . 2012-06-17 11:06:05.253 -------------------------------------------------------------------------- . 2012-06-17 11:06:05.253 Using SFTP protocol. . 2012-06-17 11:06:05.253 Doing startup conversation with host. > 2012-06-17 11:06:05.259 Type: SSH_FXP_INIT, Size: 5, Number: -1 . 2012-06-17 11:06:05.354 Server sent command exit status 0 . 2012-06-17 11:06:05.354 Disconnected: All channels closed * 2012-06-17 11:06:05.380 (ESshFatal) Connection has been unexpectedly closed. Server sent command exit status 0. * 2012-06-17 11:06:05.380 Cannot initialize SFTP protocol. Is the host running a SFTP server?


  • Issue configuring Oracle database for SSL

    - by Santhosha
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As a first requirement, I generated a self-signed certificate using OpenSSL and added the certificate to a wallet. The wallet location is specified in the server configuration. I created a listener and it starts; however, it does not provide any service. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below. STATUS of the LISTENER Alias SSLLISTENER Version TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production Start Date 14-NOV-2009 01:47:08 Uptime 16 days 22 hr. 14 min. 3 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora Listener Log File c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT =2484))) The listener supports no services The command completed successfully Here is the exact content of the various files after configuration. 1) File Name: tnsnames.ora ORCL = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521)) ) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) ) 2) File Name: sqlnet.ora SSL_VERSION = 0 NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT) sqlnet.authentication_services= (NONE) tcp.validnode_checking = no tcp.invited_nodes=(PS0803.oraebs.com,PS2948,PS5098) SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) 3) File Name: listener.ora SSL_CLIENT_AUTHENTICATION = FALSE WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet) ) ) LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT 1521)) ) ) SSLLISTENER = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484)) ) Thanks Santhosh


  • Looking for a communication framework for delphi

    - by Ryan
    I am looking for a communication framework for Delphi. We know there are many communication frameworks for other languages (WCF, ECF and so forth), but I have never found one for Delphi so far; can anybody who knows of one give me an idea? These are the requirements I need:
    1. Building an application (server or client) without caring how the two endpoints communicate with each other. Imagine that we use a mailbox for exchanging messages; the communication seems transparent.
    2. Supports extending the communication protocol. We often need to exchange messages between two devices, but the communication protocol is not a public or general one, so we need to extend the framework to implement a complete communication protocol for receiving or sending a message.
    3. Supports asynchronous and synchronous communication.
    4. Supports extending the transmission protocol. The transmission protocol can be implemented with WinSock, pipes, COM, Windows messages, mailslots and so forth.
    In the client application, we can write code snippets like the following: var server: TDelphiCommunicationServer; session: ICommunicationSession; request, response: IMessage; begin session := server.CreateSession('IP', Port); request := TLoginRequest.Create; session.SynSendMessage(request); session.WaitForMessage(response, INFINITE); ....... end; In the above snippet, TLoginRequest implements the message interface.


  • Keep alive using SIP in .net

    - by Ramesh Soni
    I am creating an application where I need to implement the SIP protocol in .NET. We have a client-server setup where the client keeps sending keep-alive messages to the server. We can only use the SIP protocol or any other protocol which is supported with ICE. Could someone help me implement this? I don't have much of an idea about these protocols, but I know .NET very well. Some sample code would be of great help.
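    Since the question does not name a SIP stack, here is a rough, hypothetical sketch of one common approach: sending a periodic SIP OPTIONS request over UDP as the keep-alive. The host name, port, and every header value below are placeholder assumptions, and a production client would use a proper SIP library and parse the responses rather than fire and forget.
```csharp
// Hypothetical keep-alive sketch: periodic SIP OPTIONS over UDP.
// All endpoints and header values are placeholders, not from the question.
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class SipKeepAlive
{
    static void Main()
    {
        using (var udp = new UdpClient("sip.example.com", 5060))
        {
            int cseq = 0;
            while (true)
            {
                cseq++;
                string options =
                    "OPTIONS sip:sip.example.com SIP/2.0\r\n" +
                    "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK" + cseq + "\r\n" +
                    "Max-Forwards: 70\r\n" +
                    "From: <sip:client@example.com>;tag=keepalive\r\n" +
                    "To: <sip:sip.example.com>\r\n" +
                    "Call-ID: keepalive@client.example.com\r\n" +
                    "CSeq: " + cseq + " OPTIONS\r\n" +
                    "Content-Length: 0\r\n\r\n";

                byte[] data = Encoding.ASCII.GetBytes(options);
                udp.Send(data, data.Length); // a real client would also read the 200 OK reply

                Thread.Sleep(TimeSpan.FromSeconds(30)); // keep-alive interval (assumption)
            }
        }
    }
}
```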


  • How do you fix an SVN 409 Conflict Error

    - by NerdStarGamer
    I used to use SVN 1.4 on OS X Leopard and everything was fine. A couple of weeks ago I installed a fresh copy of OS X 10.6. The version of SVN that comes with Snow Leopard is 1.6.5. I went ahead and built my own copy with 1.6.6. I'm using the built in apache server and just hosting repositories locally. Everything appeared to work fine until I actually tried to commit something. Everytime I try to commit a change, I get the following message: Transmitting file data .svn: Commit failed (details follow): svn: MERGE of '/svn/svn2': 409 Conflict (http://localhost) This happens with my old repositories, so I created a couple of new ones. Same deal. I also tried using the 1.6.5 version that comes with the system...same. Finally, I tried upgrading to the latest stable SVN (1.6.9) and still got the same problem. The Apache error logs the following for each failed commit: [Mon Mar 29 19:53:10 2010] [error] [client ::1] Could not MERGE resource "/svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1" into "/svn/svn2". [409, #0] [Mon Mar 29 19:53:10 2010] [error] [client ::1] An error occurred while committing the transaction. [409, #2] [Mon Mar 29 19:53:10 2010] [error] [client ::1] Can't open directory '/usr/local/svn/svn2/db/transactions/5-6.txn/\xeb\xa9\x0f\x1f': No such file or directory [409, #2] [Mon Mar 29 19:53:11 2010] [error] [client ::1] Could not DELETE /svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1. [500, #0] [Mon Mar 29 19:53:11 2010] [error] [client ::1] could not open transaction. [500, #2] [Mon Mar 29 19:53:11 2010] [error] [client ::1] Can't open file '/usr/local/svn/svn2/db/transactions/5-6.txn/props': No such file or directory [500, #2] And from the access log: ::1 - - [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 401 401 ::1 - user [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 200 188 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/vcc/default HTTP/1.1" 207 398 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/bln/6 HTTP/1.1" 207 449 ::1 - user [30/Mar/2010:13:02:20 -0400] "REPORT /svn/svn2/!svn/vcc/default HTTP/1.1" 200 1172 Curiously, the commit does actually commit the changes, but the working copy doesn't see that and everything gets screwy. I've tried to Google every variation I can think of for this problem, but the search results are pretty much useless. I'm not using TortoiseSVN or anything special and commits fail on a new repository, so I know it's not a problem with my old repos. Any help would be greatly appreciated.


  • What is the difference between AF_INET and PF_INET constants?

    - by Denilson Sá
    Looking at examples about socket programming, we can see that some people use AF_INET while others use PF_INET. In addition, sometimes both of them are used in the same example. The question is: Is there any difference between them? Which one should we use? If you can answer that, another question would be... Why are there these two similar (but equal) constants? What I've discovered so far: The socket manpage In (Unix) socket programming, we have the socket() function that receives the following parameters: int socket(int domain, int type, int protocol); The manpage says: The domain argument specifies a communication domain; this selects the protocol family which will be used for communication. These families are defined in <sys/socket.h>. And the manpage cites AF_INET as well as some other AF_ constants for the domain parameter. Also, in the NOTES section of the same manpage, we can read: The manifest constants used under 4.x BSD for protocol families are PF_UNIX, PF_INET, etc., while AF_UNIX etc. are used for address families. However, already the BSD man page promises: "The protocol family generally is the same as the address family", and subsequent standards use AF_* everywhere. The C headers The sys/socket.h header does not actually define those constants, but instead includes bits/socket.h. This file defines around 38 AF_ constants and 38 PF_ constants like this: #define PF_INET 2 /* IP protocol family. */ #define AF_INET PF_INET Python The Python socket module is very similar to the C API. However, there are many AF_ constants but only one PF_ constant (PF_PACKET). Thus, in Python we have no choice but to use AF_INET. I think this decision to include only the AF_ constants follows one of the guiding principles: "There should be one-- and preferably only one --obvious way to do it." (The Zen of Python)
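    As an aside that goes beyond the original question: newer socket APIs tend to follow the same "use AF_* everywhere" advice and expose only the address family. A minimal .NET sketch for comparison:
```csharp
// For comparison only: .NET's Socket constructor asks for an AddressFamily,
// mirroring the advice above to use the AF_* constants everywhere.
using System.Net.Sockets;

class AddressFamilyExample
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        socket.Close();
    }
}
```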


  • Browser Based Streaming Video/Audio (not progressive download)

    - by Josh
    Hello, I am trying to understand conceptually the best way to deliver real streaming audio and video content. I would want it to be consumed with a web browser, utilizing the least amount of proprietary technology. I wouldn't be serving static files and using progressive download; this would be real audio streams being captured live. How does one broadcast a stream that will be reasonably in sync with the source? What kind of protocol is suitable? Edit: In my research I've found that there are a few protocols: RTSP, HTTP streaming, RTMP, and RTP. HTTP streaming is somewhat unsuitable if you are streaming a live performance/communication of some kind, because it relies on TCP (as it's HTTP-based) and you don't lose packets. In a low-bandwidth situation, the client can get significantly behind in playback (ref). RTMP is a proprietary technology, requiring Flash Media Server. Crap on that. The reason I looked at Flash is because it is extremely flexible as far as user experience goes. SoundManager2 provides an excellent JavaScript interface for playing media with Flash. This is what I would look for in a client application. RTSP/RTP is what Microsoft switched to using, deprecating their MMS protocol. RTSP is the control protocol. It's similar to HTTP with a few distinct differences -- the server can also talk to the client, and there are additional commands, like PAUSE. It's also a stateful protocol, where state is maintained with a session id. RTP is the protocol for delivering the payload (encoded audio or video). There are a few open-source projects, one of them supported by Apple here. It seems like this might do what I want it to, and it looks like quite a few players support it. It sounds like it would be suitable for a "live" broadcast from this page here. Thanks, Josh

    Read the article

  • Linq Query to Update Nested Array Items?

    - by Brett
    I have an object structure generated from xsd.exe. Roughly, it consists of 3 nested arrays: protocols, sources and reports. The xml looks like this:

        <protocols>
          <protocol>
            <source>
              <report />
              <report />
            </source>
            <source>
              <report />
              <report />
            </source>
          </protocol>
          <!-- more protocols -->
        </protocols>

    I need to update a single "Report" within the data structure. A brute force algorithm is shown below. I know that this could be done using XDocument and Linq, but I'd prefer to update the data structure and then serialize the structure back to disk. Thoughts? Brett

        bool updated = false;
        foreach (ProtocolsProtocol protocol in protocols.Protocol)
        {
            if (updated) break;
            foreach (ProtocolsProtocolSource source in protocol.Source)
            {
                if (updated) break;
                for (int i = 0; i < source.Report.Length; i++)
                {
                    ProtocolsProtocolSourceReport currentReport = source.Report[i];
                    if (currentReport.Id == report.Id)
                    {
                        currentReport.Attribute1 = report.Attribute1;
                        currentReport.Attribute2 = report.Attribute2;
                        updated = true;
                        break;
                    }
                }
            }
        }
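
    For comparison, a LINQ-to-objects sketch of the same in-place update. It assumes the xsd.exe-generated types expose the Protocol, Source and Report members exactly as used in the loop above, so treat the names as guesses rather than the author's code:

        // Requires: using System.Linq;
        // Flatten protocols -> sources -> reports and update the first matching report in place.
        var match = protocols.Protocol
            .SelectMany(p => p.Source)
            .SelectMany(s => s.Report)
            .FirstOrDefault(r => r.Id == report.Id);

        if (match != null)
        {
            match.Attribute1 = report.Attribute1;
            match.Attribute2 = report.Attribute2;
        }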

    Read the article

  • SSL HandShakeException: No_Certificate. Using IBM's J9 JVM and Apache Tomcat

    - by DaveJohnston
    I am developing a mobile application that is to run on a Windows Mobile PDA. The application is written in Java, and to run it we are using the J9 JVM from IBM. The application communicates with an Apache Tomcat server over HTTP, and we are trying to set it up now to use SSL. I have generated public/private keys for both the client and the server, exported their self-signed certificates, and imported them into the respective keystores. Initially I tried to just get it working using only server-side authentication, and that was successful. I am now trying to get mutual authentication by setting clientAuth="true" in the server.xml file in the Apache conf directory.

    I have enabled the SSL logging on the server, and when the client connects the server reports an SSLProtocolException: handshake alert: no_certificate. The client logs also show an exception:

        javax.net.ssl.SSLHandshakeException: unexpected_message
            at com.ibm.j9.jsse.SSLSocketImpl.completeHandshake(Unknown Source)
            at com.ibm.j9.jsse.SSLSocketImpl.startHandshake(Unknown Source)
            at com.ibm.oti.net.www.protocol.https.HttpsURLConnection.openSocket(Unknown Source)
            at com.ibm.oti.net.www.protocol.https.HttpsURLConnection.connect(Unknown Source)
            at com.ibm.oti.net.www.protocol.https.HttpsURLConnection.sendRequest(Unknown Source)
            at com.ibm.oti.net.www.protocol.https.HttpsURLConnection.doRequest(Unknown Source)
            at com.ibm.oti.net.www.protocol.https.HttpsURLConnection.getInputStream(Unknown Source)

    The client keystore and truststore are configured by setting the following system properties:

        javax.net.ssl.trustStore
        javax.net.ssl.trustStorePassword
        javax.net.ssl.keyStore
        javax.net.ssl.keyStorePassword

    Does anyone have any ideas how I can set up client authentication on the J9 JVM?
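
    For illustration, a minimal sketch (not the author's code -- the paths and passwords are placeholders) of how those four properties are typically set on the client before the first HTTPS connection is opened:

        public class SslClientSetup {
            // Hypothetical store locations: the keystore holds the client's own certificate
            // and private key; the truststore holds the server's self-signed certificate.
            public static void configureStores() {
                System.setProperty("javax.net.ssl.keyStore", "/app/client.keystore");
                System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
                System.setProperty("javax.net.ssl.trustStore", "/app/client.truststore");
                System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
            }
        }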

    Read the article

  • git strategy to have a set of commits limited to a particular branch

    - by becomingGuru
    I need to merge between dev and master frequently. I also have a commit that I need to apply to dev only, for things to work locally. Earlier I only merged from dev to master, so I kept a branch production_changes that contained the "undo commit" of the dev-only commit, and merged that from master. That used to work fine. Now, each time I merge between dev and master in either direction, I have to cherry-pick and apply the same commit again and again :( which is ugly. What strategy can I adopt so that I can seamlessly merge between the two branches, yet retain some of the changes on only one of them?
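
    One commonly suggested arrangement (a sketch only; the branch name is made up) is to keep the local-only commit on a small side branch that is never merged, so the dev/master merges stay clean:

        # Keep the "works locally" tweak on its own branch, off to the side.
        git checkout dev
        git checkout -b dev-local
        git commit -am "local-only tweak"      # this commit lives only on dev-local

        # dev <-> master merges never see the tweak, so nothing has to be undone.
        git checkout master
        git merge dev

        # After dev moves forward, carry the tweak along by rebasing.
        git checkout dev-local
        git rebase dev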

    Read the article

  • Why is the meaning of “ours” and “theirs” reversed with git-svn

    - by Marc Liyanage
    I use git-svn and I noticed that when I have to fix a merge conflict after performing a git svn rebase, the meaning of the --ours and --theirs options to e.g. git checkout is reversed. That is, if there's a conflict and I want to keep the version that came from the SVN server and throw away the changes I made locally, I have to use ours, when I would expect it to be theirs. Why is that?

    Example:

        mkdir test
        cd test
        svnadmin create svnrepo
        svn co file://$PWD/svnrepo svnwc
        cd svnwc
        echo foo > test.txt
        svn add test.txt
        svn ci -m 'svn commit 1'
        cd ..
        git svn clone file://$PWD/svnrepo gitwc
        cd svnwc
        echo bar > test.txt
        svn ci -m 'svn commit 2'
        cd ..
        cd gitwc
        echo baz > test.txt
        git commit -a -m 'git commit 1'
        git svn rebase
        git checkout --ours test.txt
        cat test.txt    # shows "bar" but I expect "baz"
        git checkout --theirs test.txt
        cat test.txt    # shows "baz" but I expect "bar"

    Read the article

  • Lua API for TokyoTyrant

    - by jideel
    Hi SO folks, I haven't managed to find a Lua client/API for TokyoTyrant. Such an API exists for TokyoCabinet, but not for TT, while Perl and Ruby APIs do exist for TT. TT provides a native binary protocol, a memcached-compatible protocol, and an HTTP-oriented protocol. So my questions are:

    1/ Do you think using the memcached protocol (via luamemcached) or the HTTP protocol (via LuaSocket) is "enough" for most simple usage, so that a native Lua API is not necessary? (The app is a simple UUID storage/distributor.)

    2/ Does it make sense not to use TokyoTyrant at all, but only TokyoCabinet, and use Lua at the application level to provide network and concurrent access to TC, using, say, Copas? (Copas is, from their website, "a dispatcher based on coroutines that can be used by TCP/IP servers.")

    Thanks.
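
    As a rough illustration of the HTTP option in question 1, a LuaSocket sketch that talks to Tokyo Tyrant over its HTTP-oriented protocol; the host, port and key are made up, and the exact request semantics should be checked against the TT documentation:

        local http = require("socket.http")
        local ltn12 = require("ltn12")

        -- Store a value under a key via HTTP PUT (assumes TT listening on its default port 1978).
        local value = "some-uuid"
        http.request{
          url = "http://localhost:1978/mykey",
          method = "PUT",
          source = ltn12.source.string(value),
          headers = { ["content-length"] = #value },
        }

        -- Read it back with a plain GET.
        local body, code = http.request("http://localhost:1978/mykey")
        print(code, body)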

    Read the article

  • Parsing tnsnames.ora using regex...

    - by Welton v3.50
    I am attempting to pull some information from my tnsnames file using regex. I started with the following pattern:

        MYSCHEMA *? = *?[\W\w\S\s]*\(HOST *?= *?(?<host>\w+\s?)\)\s?\(PORT *?= *?(?<port>\d+)\s?\)[\W\w\S\s]*\(SERVICE_NAME *?= *?(?<servicename>\w+)\s?\)

    which worked fine when MYSCHEMA was the only schema in the file, but when there are other schemas listed after MYSCHEMA, it matches all the way to the last schema. I have since created a new pattern:

        MYSCHEMA *=\s*\(DESCRIPTION =\s*\(ADDRESS *= *\(PROTOCOL *= *TCP\)\(HOST *= *(?<host>\w+)\)\(PORT *= *(?<port>\d+)\)\)\s*\(CONNECT_DATA *=\s*(?<serverdedicated>\(SERVER *= *DEDICATED\))\s*\(SERVICE_NAME *= *(?<servicename>[\w\.]+) *\)\s*\)\s*\)

    This pattern matches MYSCHEMA only, but I had to add every element that appears in the MYSCHEMA entry, and it won't match MYOTHERSCHEMA if that entry does not contain all the same elements. Ideally, I'd like a pattern that matches the MYSCHEMA entry only, and captures HOST, PORT and SERVICE_NAME, and optionally (SERVER = DEDICATED) (which I didn't have in the first pattern), to named groups. Below is the sample tnsnames that I've been using for testing:

        SOMESCHEMA =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = REMOTEHOST)(PORT = 1234))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = REMOTE)
            )
          )
        MYSCHEMA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = MYHOST)(PORT = 1234))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = MYSERVICE.LOCAL )
            )
          )
        MYOTHERSCHEMA =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = MYHOST)(PORT = 1234))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = MYSERVICE.REMOTE)
            )
          )
        SOMEOTHERSCHEMA =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = LOCALHOST)(PORT = 1234))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = LOCAL)
            )
          )
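
    Not the pattern itself, but as an illustration of how the named groups would be read out once a pattern matches -- assuming the regex is being used from .NET, which the (?<name>...) group syntax suggests; 'pattern' and 'tnsText' are placeholder variables:

        // Requires: using System.Text.RegularExpressions;
        // 'pattern' is one of the patterns above; 'tnsText' is the contents of tnsnames.ora.
        Match m = Regex.Match(tnsText, pattern, RegexOptions.IgnoreCase);
        if (m.Success)
        {
            string host        = m.Groups["host"].Value;
            string port        = m.Groups["port"].Value;
            string serviceName = m.Groups["servicename"].Value;
            bool   dedicated   = m.Groups["serverdedicated"].Success;
        }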

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >