Search Results

Search found 3168 results on 127 pages for 'grand central dispatch'.

Page 49/127

  • Setting up a git repository on a server

    - by lostInTransit
    Hi I read through the other git questions here but couldn't really follow whether they are trying to do the same thing as I am. So if you find any duplicates, please let me know. I have a central server with SSO installed. All my machines are connected through the lan to this server. I have also setup a remote git repository on this server. Now what I'd like to do is make the server act as a central repository. All my employees can commit their code to the server and the server pushes it to the remote git repository. Also can I integrate it with SSO in any way? Can someone please help me out with this process? I am new to git and still learning how to use it effectively. So a step-by-step process or an existing document which I can refer to for this? Thanks.
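
    For context, a minimal sketch of one common way to do this: create a bare repository on the server and have every developer push to it over SSH. The paths, host name, and repository name below are made up and would need to be adapted; this does not cover the SSO integration part of the question.

        # On the server: create a bare repository to act as the central repo
        mkdir -p /srv/git/project.git
        cd /srv/git/project.git
        git init --bare

        # On each developer's machine: clone over SSH (assumes each user can SSH to the server)
        git clone user@gitserver.example.com:/srv/git/project.git

        # Work locally, commit, then publish to the central repository
        git push origin master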

    Read the article

  • Keeping Private SSH Keys Safe

    - by Carmen
    I have a central server where I store all the private SSH keys for the different machines that I want to SSH to. Currently, only sysadmins have access to this 'central' server. Given the above scenario, I'd like to ask the following questions: How do you protect your private SSH keys? I read about ssh-agent but I am not sure how to use it or whether it can be used in this situation. If a sysadmin leaves and he copies all the private SSH keys, then he has access to all the servers. How do you deal with this situation?
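
    A minimal sketch of how ssh-agent is typically used, assuming a passphrase-protected key at ~/.ssh/id_rsa (the path is an example):

        # Start an agent for the current shell session
        eval "$(ssh-agent -s)"

        # Add the key; the passphrase is asked for once and then cached in memory by the agent
        ssh-add ~/.ssh/id_rsa

        # List the keys the agent currently holds
        ssh-add -l

        # Subsequent ssh/scp/git commands in this session authenticate via the agent
        ssh admin@somehost.example.com

    Note that passphrases plus ssh-agent limit the damage if key files are copied, but they do not by themselves solve the departing-sysadmin problem; the corresponding public keys would still need to be rotated or removed from the target servers' authorized_keys files.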

    Read the article

  • Distributed Nagios Installation

    - by kruczkowski
    I'm looking for a plug-in or product that will act as a remote probe and perform tests then send back the results to the central Nagios server. Reason for this is that I'd like to monitor internal systems and servers at customers, but don't want to allow all the traffic passing the firewalls. Ideally I'd like a soft-probe that would be installed and then perform the tests and send back the results (via SSH) to the central Nagios installation. Does anyone know of a product or plug-in that would offer such service? If not Nagios, is there any other monitoring system that does such a thing (ideally open-source)?
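
    For what it's worth, the usual building blocks here are NRPE (the central server pulls checks from the remote host) and NSCA (the remote host pushes passive results); if SSH is the only traffic you want to allow through the firewall, the standard check_by_ssh plugin lets the Nagios server run plugins on the remote probe over an SSH connection. A hedged example invocation, with host names, paths, and thresholds purely illustrative:

        # Run a standard plugin on the customer-side probe over SSH and report its output/exit code
        /usr/lib/nagios/plugins/check_by_ssh \
            -H probe.customer.example.com \
            -l nagios \
            -C "/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /"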

    Read the article

  • Restarting rsyslog re-sends logs again

    - by Jay Taylor
    I am running Ubuntu 12.04.1 LTS on EC2. I have a bunch of application servers which are configured to forward their logs to a central server via rsyslog. Since putting in Nagios monitoring on the log files on the central server, I've been getting alerts indicating that particular application servers are failing to forward their logs to the centralized server. Logging into the machines and restarting the rsyslog service fixes the problem. However, rsyslog then re-transmits the logs again, resulting in duplicates on the collector. Why is it doing this?
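
    A hedged guess at the cause: if the application servers read their log files with rsyslog's imfile module, rsyslog tracks how far it has read each file in a state file; when no working directory is configured, or the state cannot be written out on an unclean stop, a restart can re-read and re-forward data that was already sent. A sketch of the relevant legacy-format directives (file names and paths are examples, not taken from the question):

        # /etc/rsyslog.d/10-app-forward.conf (illustrative file name)
        # Directory where imfile keeps its per-file state (last read position)
        $WorkDirectory /var/spool/rsyslog

        $ModLoad imfile
        $InputFileName /var/log/myapp/app.log
        $InputFileTag myapp:
        # Remembers how far the file has already been forwarded, across restarts
        $InputFileStateFile state-myapp
        $InputRunFileMonitor

        # Forward everything to the central collector over TCP
        *.* @@logs.example.com:514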

    Read the article

  • Can I automatically add a new host to known_hosts?

    - by gareth_bowles
    Here's my situation; I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via SSH. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client. The problem I'm having is that the first SSH command run against a new virtual instance always comes up with an interactive prompt: The authenticity of host '[hostname] ([IP address])' can't be established. RSA key fingerprint is [key fingerprint]. Are you sure you want to continue connecting (yes/no)? Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image ? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
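
    Two common approaches, sketched with hypothetical host names: pre-populate known_hosts with ssh-keyscan, or relax host-key checking for these disposable test instances (which gives up protection against man-in-the-middle attacks):

        # Option 1: fetch the new instance's host key and append it (hashed) to known_hosts
        ssh-keyscan -H new-vm-01.example.com >> ~/.ssh/known_hosts

        # Option 2: skip the interactive prompt for this connection only
        ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@new-vm-01.example.com 'uname -a'

    If the host key pair is baked into the virtual machine image as suggested, its public key can also be appended to a global or per-user known_hosts file ahead of time, keyed to the host names or IPs the instances will use.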

    Read the article

  • How to troubleshoot whether a zip file is valid or too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a 2GB file and I am getting the following error:

        unzip CLTE_C_08.zip
        Archive:  CLTE_C_08.zip
        End-of-central-directory signature not found. Either this file is not a zipfile, or it
        constitutes one disk of a multi-part archive. In the latter case the central directory
        and zipfile comment will be found on the last disk(s) of this archive.
        unzip: cannot find zipfile directory in one of CLTE_C_08.zip or CLTE_C_08.zip.zip,
        and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say that this error means the file is too big, others say the file is corrupt, and others say that it might not be a Unix archive. So my question: how do I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
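
    A few ways to narrow this down, sketched with the file name from the question. One hedged possibility worth checking first: archives larger than 2GB (or using ZIP64 extensions) are not handled by older unzip builds, and that can produce this exact "End-of-central-directory signature not found" message even for a perfectly good archive.

        # Check what the file actually is
        file CLTE_C_08.zip

        # Test the archive's integrity without extracting
        unzip -t CLTE_C_08.zip

        # List the central directory (zipinfo ships with unzip)
        zipinfo CLTE_C_08.zip | head

        # If the local unzip is too old for large/ZIP64 archives, 7-Zip or jar can often extract them
        # (7za comes from the p7zip package, jar from a JDK; availability is an assumption)
        7za x CLTE_C_08.zip
        jar xf CLTE_C_08.zip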

    Read the article

  • SharePoint 2010 can't find domain users when granting permissions

    - by quani
    I'm trying to grant permissions to other people to view a SharePoint site, but when granting permissions it uses "Check Names" and claims that any user or group that is part of a domain does not exist. It does this whether I grant permissions on the team site or in Central Administration, BUT if I try to add someone to Farm Administrators in Central Administration, then all of a sudden it can find all domain users. Why is it finding domain users in that one context but not others? It is supposed to be using NTLM authentication and has Windows configured as the authentication provider (and IIS is configured to use NTLM). What's even stranger is that I enabled Anonymous Access for the team site, which I thought would allow anyone to view it, but others say they can't access it.

    Read the article

  • Write a hashed password to LDAP when creating a new user

    - by alibaba
    I am working on a project with a central user database system. One of the requirements of the system is that there should be only one set of users for all the applications. FreeRADIUS and Samba are two of my applications that both use LDAP as their backend. Since users must be the same for the entire system, which contains many other applications, I have to read the list of users from the central database and recreate them in the LDAP directories for Samba and FreeRADIUS. The problem is that users are sent to me from another entity and I can save them in the database with their hashed passwords. I don't have access to their cleartext passwords. I am wondering if I can directly enter a hashed password for a new user in LDAP with my preferred hash mechanism. If not, can anyone tell me what strategy I have to use? I am running my server on Ubuntu 12.04 and all other applications are the latest versions. My database system is PostgreSQL 9.2. Thank you
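
    For OpenLDAP specifically, the userPassword attribute accepts a pre-hashed value as long as it carries the scheme prefix ({SSHA}, {SHA}, {CRYPT}, ...), so importing users with hashes you already have is possible. A minimal sketch; the DN, attributes, and hash value are made up for illustration:

        # Generate a salted SHA-1 hash (only needed when starting from a cleartext password)
        slappasswd -h '{SSHA}' -s secret

        # newuser.ldif -- illustrative entry using a hash you already hold
        dn: uid=jdoe,ou=people,dc=example,dc=com
        objectClass: inetOrgPerson
        cn: John Doe
        sn: Doe
        uid: jdoe
        userPassword: {SSHA}2HBUnqNefhQGjrHEQCzV2hMMFZldP9La

        # Load the entry into the directory
        ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f newuser.ldif

    One caveat: whether a stored hash is actually usable depends on the consumers. Samba normally authenticates against its own sambaNTPassword attribute (an NT hash), and some FreeRADIUS methods such as MS-CHAP cannot work from a salted SSHA hash, so the hash formats you receive from the other entity constrain what will actually authenticate.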

    Read the article

  • Shaping with the shorewall complex shaper does not work (or I don't understand the principle of operation)

    - by strangeman
    I have a router (Debian 6) with 2 network interfaces (and 1 virtual tun interface):

        eth0 - localnet, 192.168.1.0/24, router IP is 192.168.1.1
        eth1 - internet
        tun0 - OpenVPN to the central office; OpenVPN network 10.1.0.0/24, central office network 192.168.0.0/24

    I need to shape all traffic that moves 192.168.1.0/24 -> 192.168.0.1:6666 and 192.168.1.0/24 <- 192.168.0.1:6666, and restrict its speed to 200kbit. Now, I have this configuration, but it does not work:

    tcdevices (set up interface parameters):

        #INTERFACE   IN-BANDWIDTH   OUT-BANDWIDTH
        eth0         100mbit        100mbit
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    tcrules (mark all traffic that moves on port 6666):

        #MARK   SOURCE      DEST        PROTO   PORT(S)
        1       0.0.0.0/0   0.0.0.0/0   tcp     6666
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    tcclasses (shape all marked traffic):

        #INTERFACE   MARK   RATE        CEIL      PRIORITY   OPTIONS
        eth0         1      200kbit     200kbit   2
        eth0         255    9*full/10   full      1          default
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    Where is my mistake?

    Read the article

  • How to fix an endpoint/configuration error using WCF in VB.NET

    - by Eric
    I'm working with a small web page that is meant to assist the users of my application. This web page takes a file and sends it to a central server, which then does something with the data and returns a result. I created this application some time ago and am coming back to it recently. I am getting some kind of configuration error right now, although this application used to work.

    When it stopped working, whenever I ran the page and sent the data to the central server, I would get this error: "Could not find default endpoint element that references contract 'CentralService.ICwCentralService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element."

    Looking at some other issues on the net, I thought I might have had the answer. The service reference to the endpoint was contained in a separate project from the code that called it, but the configuration file in that project had no information about the endpoint. So, I added these entries to the web.config file in the main project:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="wsHttpEndpoint" closeTimeout="00:01:00" openTimeout="00:0:10"
                       receiveTimeout="01:10:00" sendTimeout="01:01:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="999999999" maxReceivedMessageSize="999999999"
                       messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="999999999" maxStringContentLength="999999999"
                              maxArrayLength="999999999" maxBytesPerRead="999999999"
                              maxNameTableCharCount="999999999" />
                <reliableSession ordered="true" inactivityTimeout="01:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <client>
            <endpoint address="http://localhost:22269/CwCentralService.svc" binding="wsHttpBinding"
                      bindingConfiguration="wsHttpEndpoint" contract="CentralService.ICwCentralService"
                      name="wsHttpEndpoint">
              <identity>
                <servicePrincipalName />
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    Now, if I run it, I'm still getting an error: "The remote server returned an unexpected response: (400) Bad Request." The strange thing is, though, I took those entries from another project that contacts the central server. That application has no problems contacting the central server using these settings. It's not a web page application, but I don't see how that would require these settings to change.

    I cannot tell what started causing these errors or when. I assume it's something that changed outside of the application (e.g. the libraries referenced) that requires an update to the configuration in the application. I am currently using .NET 3.0 for all of my applications. Any help would be appreciated.

    Read the article

  • Oracle OpenWorld and JavaOne 2014 - Early Bird Registration

    - by Cinzia Mascanzoni
    Don't Miss Out on Early Bird Savings: Oracle OpenWorld 2014 is several months away. So why register now, partners? Savings. And availability. Register early and you can secure your spot and hotel room for the world's largest business and technology conference. Plus, you'll save on sessions, keynotes, entertainment, and networking opportunities.

    Just What You'd Expect from Oracle OpenWorld. And More. You're probably attending the conference for the IT programs and networking opportunities. You'll find a wide selection. And that's just the start. Because Oracle OpenWorld is more than just IT. Check out and benefit from all the conference activities, including benefits specific to Oracle PartnerNetwork (OPN) at OPN Central @ OpenWorld:

        - Oracle OpenWorld Keynote
        - OPN Keynote
        - OPN General Sessions
        - OPN AfterDark Reception
        - OPN Central @ OpenWorld
        - OPN Lounge Access

    Save Even More As a Group: are you planning to register five or more people for Oracle OpenWorld 2014? If so, take advantage of our Group Pass Purchase. Partners can also sponsor Oracle OpenWorld for maximum brand exposure, or exhibit to meet customers and prospects face-to-face.

    Read the article

  • Webcast - Oracle Database In-Memory Option

    - by Thanos Terentes Printzios
    Following the recent announcement by Larry Ellison on the future of the database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can learn more about the brand new Oracle Database 12c In-Memory option. Oracle Database In-Memory is Oracle's new memory-optimized technology that transparently accelerates analytic, data warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads. Participants will learn about Oracle Database In-Memory benefits, features, and leading-edge architecture. The Database In-Memory architecture provides the ability to process data orders of magnitude faster by simply enabling the feature and identifying the tables to bring in-memory, without application changes. Details on Oracle Database In-Memory's ease of use and management, scalability, and availability will also be covered. Please join us to learn more about Oracle Database In-Memory and get first-hand knowledge of this important new feature.

    Delivery Format: this FREE online LIVE eSeminar will be delivered over the Web. These Oracle webcasts are FREE for Customers, System Integrators, ISVs, VARs and Platform Partners.

    Presenter: Richard Jacobs, Oracle Solution Architect

    Europe Webcast 1: August 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!
    Europe Webcast 2: September 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!

    Read the article

  • Installing 12.04 within 11.04

    - by user288752
    I recently installed 11.04 from an installation disk (overwriting Windows in the process). I know 11.04 is no longer supported, but I had no problems subsequently upgrading it to 12.04 (via 11.10) a couple of months ago on another device. This time though, things are different. I can't upgrade through Update Manager because Ubuntu then tells me I have no internet connection, which is obviously incorrect. I have tried to circumvent the problem by downloading the 12.04 ISO from ubuntu.com directly, but now I'm troubled by something else. The download is successful, but after mounting the ISO I can't interact with it. When I try to access the Wubi installer it gives me the following message:

        Archive:  /home/lars/.cache/.fr-7g75Fe/wubi.exe
        [/home/lars/.cache/.fr-7g75Fe/wubi.exe]
        End-of-central-directory signature not found. Either this file is not a zipfile, or it
        constitutes one disk of a multi-part archive. In the latter case the central directory
        and zipfile comment will be found on the last disk(s) of this archive.
        zipinfo: cannot find zipfile directory in one of /home/lars/.cache/.fr-7g75Fe/wubi.exe or
        /home/lars/.cache/.fr-7g75Fe/wubi.exe.zip, and cannot find /home/lars/.cache/.fr-7g75Fe/wubi.exe.ZIP, period.

    What am I doing wrong here?
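
    Before digging further, it may be worth verifying that the downloaded ISO is intact, since a truncated or corrupted image can produce this kind of archive error. A small sketch (the file name is an example; compare the output against the checksums published alongside the image on ubuntu.com):

        # Compute checksums of the downloaded image
        md5sum ubuntu-12.04-desktop-amd64.iso
        sha256sum ubuntu-12.04-desktop-amd64.iso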

    Read the article

  • SQL Azure Service Issues - 10.27.2012 (Restored Now)

    - by ToStringTheory
    Please note that if you have a Windows Azure website, or use SQL Azure, your site may be experiencing downtime currently.

    Notice: I just called in regarding one of my public facing internet sites, because the site was failing to load anything but its error page, I couldn't connect to the database to inspect application error logs, and the Windows Azure Management portal wouldn't load the SQL Azure extension. After speaking to the representative, he also mentioned that they were having some problems updating the Service Dashboard which shows service up/down time, and for now they are posting messages at http://account.windowsazure.com. Please note that this issue may only be affecting certain regions. Last, I may have misheard the representative, but he said that the outage was being categorized as a level 8, and if I heard correctly, I think he said that level 8 was the worst level. I can't say for sure on this though, because the phone connection to their support number was bad: large amounts of white noise. Good luck!

    Update: it appears that this outage may also be affecting the following services: SQL Database, Service Bus, Datamarket, Windows Azure Marketplace, Shared Caching, Access Control 2.0, and SQL Reporting. The note on the account page says the South Central US region; however, I believe the representative I spoke to also mentioned North Central. As I said before though, the connection was bad.

    Update 2: my site regained connectivity about an hour ago, and it appears that the service dashboard is back in operation with correct status and history. It does appear that I misheard on the phone regarding multiple regions, so chances are this only affected a percentage of the platform. All in all, if this WAS their worst level of a problem, they really got it fixed and back up pretty fast. I understand that it is inherent for a complex system such as Azure to have ups and downs, but at the end of the day, I am still happy to support Azure to its fullest!

    Read the article

  • Where did ULSTraceLog go to in the SharePoint 2010 Logging Database?

    - by Jan Tielens
    The Logging Database is one of the many new concepts that will make the life of many SharePoint administrators quite a bit more enjoyable. In SharePoint 2007 the Unified Logging System (ULS) logged all of its data to text files, typically found on your SharePoint server in 12\LOGS. We still have that in SharePoint 2010, but besides those text files, ULS can also write the data to a database! The advantages are obvious: easy to query, one central location for all servers in the farm, easy to build reports, etc. You can find this ULS data in the SharePoint 2010 logging database (typically called WSS_Logging), in the view ULSTraceLog.

    Quite recently on one of my demo machines (a standalone installation on Windows 7) I noticed the ULSTraceLog view was not available in the logging database. It turned out that there is a Timer Job that's responsible for writing the data to the database; when the Timer Job hasn't executed, the view is not there (the first time it executes, the view is created). Even more, the timer job was disabled, so the view would never be created, nor would any data be written to the database. If you encounter this situation as well, it's quite easy to solve:

        1. Open the SharePoint Central Administration site
        2. Navigate to the Monitoring section
        3. Select Review Job Definitions
        4. Click on the job with the name Diagnostic Data Provider: Trace Log
        5. Click on the Enable button to enable it
        6. Optionally click on Run Now afterwards, to start it immediately

    There you go, the ULSTraceLog view will be created and the ULS messages will appear in the database!

    Read the article

  • The PATRIOT Act and how it relates to the Internet

    The subject of the Internet and anonymity is a very sticky situation for me because I primarily develop web applications for a living. As a part of my job I have to track users as they enter, navigate and leave specific applications. The level of tracking depends on where the user goes within a website. The basic information that I capture includes the user's IP address, browser type, operating system, the date/time they entered the site and the URL from which the user was referred to the website.

    In addition to the custom logging that is placed on the website, web servers also have methods of logging built in as well. Web server logging allows companies to have a central repository to store all user activity across an entire server, and they can also create a central repository that allows multiple servers to store log files in one location. This allows users to be tracked across multiple servers as they browse websites hosted on a specific collection of servers. All this being said, there are methods to attempt to protect your privacy by using proxy servers and increasing your browser security levels, but that will only limit the amount of logging, not eliminate it.

    I have to agree with Traynor when he states that the PATRIOT Act eviscerates the constitutional protections of anonymous communication on the Internet. Therefore, given the recent passage and implementation of the PATRIOT Act, the constitutional guarantees of the right to anonymity have been severely compromised. I think that the PATRIOT Act is a direct violation of our First Amendment rights because it allows the government to directly monitor any and all activity on the internet, including communications, usage, and transactions. This opens the door to scrutiny and persecution of individuals who are not in line with the government's beliefs and actions. If England had had this type of monitoring capability during the Revolutionary War, I believe it would have been almost impossible to secede from England.

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling:

    Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven, engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time. You'll learn:

        - Data warehouse architecture and justification
        - Agile methodology
        - Dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions
        - Best practices, naming conventions, and lessons learned
        - Loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load)
        - Data warehouses and Big Data (Hadoop)
        - Unit testing
        - Tracking historical changes and managing large sizes

    With all the Self-Service BI hype, the data warehouse is becoming more and more central every day, since if everyone will be able to analyze data using self-service tools, it's better for them to rely on correct, uniform and coherent data. Already 50 people have registered for the workshop and seats are limited, so don't miss this unique opportunity to attend a workshop that is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making a switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote & local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have 2 remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow If we don't use forks, we can probably get by fine by using a central repo, creating a branch for each task we're working on, and merging them into the development branch on the same repository. It means we won't be able to restrict merging of branches and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?
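
    For reference, a sketch of what the two-remote fork workflow amounts to in practice (organization, user, and repository names are placeholders):

        # One-time setup on each developer's machine
        git clone git@github.com:devname/project.git            # origin = the personal fork
        cd project
        git remote add upstream git@github.com:team/project.git # upstream = the main repository

        # Day-to-day: sync with upstream and work on a topic branch
        git fetch upstream
        git checkout -b my-task upstream/master
        # ...commit work...
        git push origin my-task                                  # then open a pull request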

    Read the article

  • Read-only lock on a SharePoint site collection, or Why can't I edit anymore?

    - by PeterBrunone
    Monday morning, the calls started. For some reason, long-time users were unable to edit list items. I figured we had a permissions issue, so I popped in to look at the Site Settings -- and found that I couldn't. A quick trip to Central Administration showed that I was still listed as a Site Collection Administrator, but I had no power at all on the site collection in question. A quick glance at the logs told me that the server had recently shut down unexpectedly (this is a Hyper-V virtual machine). Apparently, in the confusion, somehow SharePoint decided to lock the site collection as Read Only. This can be remedied in one of two ways:

    1) In Central Administration, go to Application Management -> SharePoint Site Management -> Site collection quotas and locks. Once you have arrived, select the correct application and site collection, and you will have the opportunity to view and set the lock status of the collection (it most likely will be set to "Read-only", and you'll want to move that radio button to "Not locked").

    2) Fire up stsadm and issue the following command:

        stsadm -o setsitelock -url http://myportalsitecollection -lock none

    Read the article

  • Tracking contributions from contributors not using git

    - by alex.jordan
    I have a central git repo located on a server. I have many contributors that are not tech savvy, do not have server access, and do not know anything about git. But they are able to contribute via the project's web side. Each of them logs on via a web browser and contributes to the project. I have set things up so that when they log on, each user's contributions are made into a cloned repo on the server that is specifically for that user. Periodically, I log on to the server, visit each of their repos, and do a git diff to make sure they haven't done anything bad. If all is well, I commit their changes and push them to the central repo. Of course I need to manually look at their changes so that I can add an appropriate commit message. But I would also like to track who made the changes. I am making the commit, and I (and the web server) are the only users that are actually writing anything to the server. I could track this in the commit messages. While this strikes me as wrong, if this is my only option, is there a way to make userx's cloned repo always include "userx: " before each commit message that I add, so that I do not have to remind myself which user's repo I am in? Or even better, is there an easy way for me to make the commit, but in such a way as I credit the user whose cloned repo I am in?
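
    Assuming the goal is just to record who authored each change while you remain the committer, git already separates those two roles. A sketch (names and e-mail addresses are examples):

        # In userx's cloned repo on the server, after reviewing the diff
        git add -A
        git commit --author="User X <userx@example.com>" -m "Describe the web-side change"

        # Push the reviewed commit to the central repository (origin)
        git push origin master

        # Authors and committers can later be listed separately
        git log --pretty=format:'%h %an (author) / %cn (committer): %s'

    Alternatively, running git config user.name "User X" and git config user.email userx@example.com inside each user's clone makes every commit created from that clone default to that identity.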

    Read the article

  • "sha256sum mismatch jdk-7u3-linux-x64.tar.gz " error when trying to install Oracle Java

    - by Fawkes5
    I recently tried to install Java 7 on Ubuntu 12.04 and I think I screwed something up. I followed the instructions given here. First you need to remove OpenJDK; for this, run the following command from your terminal:

        sudo apt-get purge openjdk*

    Now you can install Java 7 by adding the following repository:

        sudo add-apt-repository ppa:eugenesan/java
        sudo apt-get update
        sudo apt-get install oracle-java7-installer

    Now every time I install a new program I get the following error:

        Download done.
        sha256sum mismatch jdk-7u3-linux-x64.tar.gz
        Oracle JDK 7 is NOT installed.
        dpkg: error processing oracle-java7-installer (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up python-central (0.6.17ubuntu1) ...
        Setting up python-eggtrayicon (2.25.3-11) ...
        Setting up gmail-notify (1.6.1.1-1ubuntu1) ...
        Processing triggers for python-central ...
        Errors were encountered while processing:
         oracle-java7-installer
        Error in function:

    However, the program seems to install and work just fine, so it doesn't seem to be a problem preventing me from doing anything. So then I reinstalled OpenJDK by going:

        sudo apt-get install openjdk*

    But I still get the same error. Going:

        sudo apt-get install oracle-java7-installer

    gives me the same error. What is going on? Please let me know if this is clear or not and I'll try to explain my issue better.
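
    A hedged guess at what is happening: the oracle-java7-installer package downloads the JDK tarball from Oracle at install time and verifies it against a checksum shipped in the package; when Oracle replaces the download (or the download is truncated), the checksum no longer matches and the package is left half-configured, so apt retries it after every subsequent install. Two standard apt/dpkg steps that are often worth trying (whether they help depends on the PPA shipping an updated installer):

        # Pull the latest installer package from the PPA and retry the configuration
        sudo apt-get update
        sudo apt-get install --reinstall oracle-java7-installer
        sudo dpkg --configure -a

        # If it still fails, remove the broken package so other installs stop erroring out
        sudo apt-get remove --purge oracle-java7-installer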

    Read the article

  • Obscure SPUtility.SendMail Behavior When Manually Passing in Mail Headers

    - by Damon
    There are two ways to send mail in SharePoint: you can either use the mail components from the System.Net namespace, or you can send email using SharePoint's SPUtility.SendMail method. One of the benefits of the SPUtility.SendMail method is that it uses the mail configuration from SharePoint, so you can manage settings in Central Administration instead of having to go through and modify your web.config file. SPUtility.SendMail can get the job done, but it's definitely not as developer friendly as the components from the System.Net namespace. If you want to CC someone on an email, for example, you do NOT have a nice CC parameter; you have to manually add the CC mail header and pass it into the SPUtility.SendMail method. I had to do this the other day, and ran into a really obscure issue.

    If you do NOT pass the headers into the method, then SharePoint sends the email using the From address configured in the Outgoing Mail settings in Central Administration. If you pass headers into the method but do not include the from header, then SharePoint sends the mail using the email address of the current user. This can be an issue if your mail server is set up to reject an email from an invalid email address or an email address that is not on your domain. The way to fix this issue is to always pass in the from header. If you want to use the configured From address, then you can do the following:

        SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
        StringDictionary headers = new StringDictionary();
        headers.Add("from", webApp.OutboundMailSenderAddress);

    Read the article

  • Optimal communication pattern to update subscribers

    - by hpc
    What is the optimal way to update the subscribers' local models given changes C to a central model M (M + C = M_c)? The update can be done by the following methods:

    1. Publish the updated model M_c to all subscribers. Drawback: if the model is big in contrast to the change, it results in much more data to be communicated.

    2. Publish the change C to all subscribers. The subscribers will then update their local model in the same way as the server does. Drawback: the client needs to know the business logic to update the model in the same way as the server. It must be assured that the subscribed model stays equal to the central model.

    3. Calculate the delta (or patch) of the change (M_c - M = D_c) and transfer the delta. Drawback: this requires that calculating and applying the delta (M + D_c = M_c) is a cheap/easy operation.

    If a client newly subscribes it must be initialized. This involves sending the current model M, so method 1 is always required. Think of playing chess as a concrete example: subscribers send moves and want to see the latest chess board state. The server checks the validity of each move and applies it to the chess board. The server can then send the updated chessboard (method 1), or just send the move (method 2), or send the delta (method 3): remove piece on field D4, put tower on field D8.

    Read the article

  • Data Synchronization in mobile apps - multiple devices, multiple users

    - by ProgrammerNewbie
    I'm looking into building my first mobile app. One of the core features of the application is that multiple devices/users will have access to the same data, and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform data operations (e.g. adding a record, editing a record, deleting a record).

    I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet, and thus cannot communicate with this central server. So:

        1. User A is offline and edits record #100
        2. User B is offline and edits record #100
        3. User C is offline and deletes record #100
        4. User C goes online (presumably, record #100 should get deleted on the server)
        5. Users A and B go online, but the records they edited no longer exist

    All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.

    Read the article
