Search Results

Search found 15137 results on 606 pages for 'global state'.


  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote mysql connections) on my ubuntu 12.04 server machine but for the life of me can't get the damned thing to work! Here is what I did: 1) list current firewall rules: $> sudo iptables -nL -v output: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 225 16984 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22 220 69605 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable 486 54824 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8 4 208 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: " 4 208 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 735 182K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain fail2ban-ssh (1 references) pkts bytes target prot opt in out source destination 225 16984 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 2) try to connect from remote machine: $> mysql -u root -p -h x.x.x.x output: timeout.... failed to connect 3) try to add a new rule to iptables: iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT 4) make sure the new rule is added: $> sudo iptables -nL -v output: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 359 25972 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22 251 78665 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable 628 64420 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8 5 260 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: " 5 260 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable 0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 919 213K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain fail2ban-ssh (1 references) pkts bytes target prot opt in out source destination 359 25972 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 which appears to be the case (last line in "Chain INPUT" section). 5) try to connect again from remote machine: $> mysql -u root -p -h x.x.x.x output: timeout.... failed to connect which is failing again. 6) try to flush all rules: $> sudo iptables -F 7) this time I CAN CONNECT. 8) reboot server and try to connect, FAILURE. 
I suspect that since the new rule is appended at the end it has no effect, as there appears to be a "reject all" sort of rule before it. If that is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
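
If the ordering is the problem, a minimal sketch of a fix would be to insert the rule above the LOG/REJECT pair with -I instead of appending it with -A, and then persist the ruleset; the insert position (5) and the iptables-persistent package are assumptions, not something from the original setup:

    # Insert the MySQL rule above the LOG/REJECT rules (position 5 here) instead of appending it
    sudo iptables -I INPUT 5 -i eth0 -p tcp --dport 3306 -m state --state NEW -j ACCEPT

    # Confirm the rule now sits before the final REJECT rule
    sudo iptables -nL INPUT --line-numbers

    # Persist the ruleset across reboots
    sudo apt-get install iptables-persistent
    sudo sh -c 'iptables-save > /etc/iptables/rules.v4'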

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
    ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.   The Advantages of REST Methodology over SOAP Simpler API for CRUD ops Standardize Development methodology - consistent and intuitive Standards based à client interop Wide industry adoption, Ease of use à easy to add new devs Avoid service method signature blowout Smaller payloads than SOAP Stateless à no session data means multi-tenant scalability Cache-ability Testability   General RESTful API Design Overview · utilize HTTP Protocol - Usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and Mime Types · Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers. · keep the API semantic, resource-centric – A RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC Hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data. utilize Uri to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters} · entity CRUD operations are matched to HTTP methods: · Create - POST / PUT · Read – GET - cacheable · Update – PUT · Delete - DELETE · Use Uris to represent a hierarchies - Resources in RESTful URLs are often chained · Statelessness allows for idempotency – apply an op multiple times without changing the result. POST is non-idempotent, the rest are idempotent (if DELETE flags records instead of deleting them). · Cache indication - Leverage HTTP headers to label cacheable content and indicate the permitted duration of cache · PUT vs POST - The client uses PUT when it determines which URI (Id key) the new resource should have. The client uses POST when the server determines they key. PUT takes a second param – the id. POST creates a new resource. The server assigns the URI for the new object and returns this URI as part of the response message. Note: The PUT method replaces the entire entity. That is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred DELETE deletes a resource at a specified URI – typically takes an id param · Leverage Common HTTP Response Codes in response headers 200 OK: Success 201 Created - Used on POST request when creating a new resource. 304 Not Modified: no new data to return. 400 Bad Request: Invalid Request. 401 Unauthorized: Authentication. 403 Forbidden: Authorization 404 Not Found – entity does not exist. 406 Not Acceptable – bad params. 409 Conflict - For POST / PUT requests if the resource already exists. 500 Internal Server Error 503 Service Unavailable · Leverage uncommon HTTP Verbs to reduce payload sizes HEAD - retrieves just the resource meta-information. OPTIONS returns the actions supported for the specified resource. PATCH - partial modification of a resource. · When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state. · Utilize Headers for content negotiation, caching, authorization, throttling o Content Negotiation – choose representation (e.g. JSON or XML and version), language & compression. 
Signal via RequestHeader.Accept & ResponseHeader.Content-Type Accept: application/json;version=1.0 Accept-Language: en-US Accept-Charset: UTF-8 Accept-Encoding: gzip o Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time) o Authorization - basic HTTP authentication uses the RequestHeader.Authorization to specify a base64 encoded string "username:password". can be used in combination with SSL/TLS (HTTPS) and leverage OAuth2 3rd party token-claims authorization. Authorization: Basic sQJlaTp5ZWFslylnaNZ= o Rate Limiting - Not currently part of HTTP so specify non-standard headers prefixed with X- in the ResponseHeader. X-RateLimit-Limit: 10000 X-RateLimit-Remaining: 9990 · HATEOAS Methodology - Hypermedia As The Engine Of Application State – leverage API as a state machine where resources are states and the transitions between states are links between resources and are included in their representation (hypermedia) – get API metadata signatures from the response Link header - in a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client. · error responses - Do not just send back a 200 OK with every response. Response should consist of HTTP error status code (JQuery has automated support for this), A human readable message , A Link to a meaningful state transition , & the original data payload that was problematic. · the URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuff all your calls into just four methods is not as crazy as it sounds. · Scoping - Path variables look like you’re traversing a hierarchy, and query variables look like you’re passing arguments into an algorithm · Mapping URIs to Controllers - have one controller for each resource is not a rule – can consolidate - route requests to the appropriate controller and action method · Keep URls Consistent - Sometimes it’s tempting to just shorten our URIs. not recommend this as this can cause confusion · Join Naming – for m-m entity relations there may be multiple hierarchy traversal paths · Routing – useful level of indirection for versioning, server backend mocking in development ASPNET WebAPI Considerations ASPNET WebAPI implements a lot (but not all) RESTful API design considerations as part of its infrastructure and via its coding convention. Overview When developing an API there are basically three main steps: 1. Plan out your URIs 2. Setup return values and response codes for your URIs 3. Implement a framework for your API.   Design · Leverage Models MVC folder · Repositories – support IoC for tests, abstraction · Create DTO classes – a level of indirection decouples & allows swap out · Self links can be generated using the UrlHelper · Use IQueryable to support projections across the wire · Models can support restful navigation properties – ICollection<T> · async mechanism for long running ops - return a response with a ticket – the client can then poll or be pushed the final result later. · Design for testability - Test using HttpClient , JQuery ( $.getJSON , $.each) , fiddler, browser debug. Leverage IDependencyResolver – IoC wrapper for mocking · Easy debugging - IE F12 developer tools: Network tab, Request Headers tab     Routing · HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.) · {id}, if present, is matched to a method parameter named id. 
· Query parameters are matched to parameter names when possible · Done in config via Routes.MapHttpRoute – similar to MVC routing · Can alternatively: o decorate controller action methods with HttpDelete, HttpGet, HttpHead,HttpOptions, HttpPatch, HttpPost, or HttpPut., + the ActionAttribute o use AcceptVerbsAttribute to support other HTTP verbs: e.g. PATCH, HEAD o use NonActionAttribute to prevent a method from getting invoked as an action · route table Uris can support placeholders (via curly braces{}) – these can support default values and constraints, and optional values · The framework selects the first route in the route table that matches the URI. Response customization · Response code: By default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non Get methods should return HttpResponseMessage · Location: When the server creates a resource, it should include the URI of the new resource in the Location header of the response. public HttpResponseMessage PostProduct(Product item) {     item = repository.Add(item);     var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);     string uri = Url.Link("DefaultApi", new { id = item.Id });     response.Headers.Location = new Uri(uri);     return response; } Validation · Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute. · Check payloads using ModelState.IsValid · Under posting – leave out values in JSON payload à JSON formatter assigns a default value. Use with RequiredAttribute · Over-posting - if model has RO properties à use DTO instead of model · Can hook into pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting Config · Done in App_Start folder > WebApiConfig.cs – static Register method: HttpConfiguration param: The HttpConfiguration object contains the following members. Member Description DependencyResolver Enables dependency injection for controllers. Filters Action filters – e.g. exception filters. Formatters Media-type formatters. by default contains JsonFormatter, XmlFormatter IncludeErrorDetailPolicy Specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages. Initializer A function that performs final initialization of the HttpConfiguration. MessageHandlers HTTP message handlers - plug into pipeline ParameterBindingRules A collection of rules for binding parameters on controller actions. Properties A generic property bag. Routes The collection of routes. Services The collection of services. · Configure JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects Documentation generation · create a help page for a web API, by using the ApiExplorer class. · The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection · create the help page as an MVC view public ILookup<string, ApiDescription> GetApis()         {             return _explorer.ApiDescriptions.ToLookup(                 api => api.ActionDescriptor.ControllerDescriptor.ControllerName); · provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like – e.g. 
extract XML comments or define custom attributes to apply to the controller [ApiDoc("Gets a product by ID.")] [ApiParameterDoc("id", "The ID of the product.")] public HttpResponseMessage Get(int id) · GlobalConfiguration.Configuration.Services – add the documentation Provider · To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute Plugging into the Message Handler pipeline · Plug into request / response pipeline – derive from DelegatingHandler and override theSendAsync method – e.g. for logging error codes, adding a custom response header · Can be applied globally or to a specific route Exception Handling · Throw HttpResponseException on method failures – specify HttpStatusCode enum value – examine this enum, as its values map well to typical op problems · Exception filters – derive from ExceptionFilterAttribute & override OnException. Apply on Controller or action methods, or add to global HttpConfiguration.Filters collection · HttpError object provides a consistent way to return error information in the HttpResponseException response body. · For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response public HttpResponseMessage PostProduct(Product item) {     if (!ModelState.IsValid)     {         return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState); Cookie Management · Cookie header in request and Set-Cookie headers in a response - Collection of CookieState objects · Specify Expiry, max-age resp.Headers.AddCookies(new CookieHeaderValue[] { cookie }); Internet Media Types, formatters and serialization · Defaults to application/json · Request Accept header and response Content-Type header · determines how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data · customizable formatters can be inserted into the pipeline · POCO serialization is opt out via JsonIgnoreAttribute, or use DataMemberAttribute for optin · JSON serializer leverages NewtonSoft Json.NET · loosely structured JSON objects are serialzed as JObject which derives from Dynamic · to handle circular references in json: json.SerializerSettings.PreserveReferencesHandling =    PreserveReferencesHandling.All à {"$ref":"1"}. · To preserve object references in XML [DataContract(IsReference=true)] · Content negotiation Accept: Which media types are acceptable for the response, such as “application/json,” “application/xml,” or a custom media type such as "application/vnd.example+xml" Accept-Charset: Which character sets are acceptable, such as UTF-8 or ISO 8859-1. Accept-Encoding: Which content encodings are acceptable, such as gzip. Accept-Language: The preferred natural language, such as “en-us”. o Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.) 
· Controller methods can take JSON representations of DTOs as params – auto-deserialization · Typical JQuery GET request: function find() {     var id = $('#prodId').val();     $.getJSON("api/products/" + id,         function (data) {             var str = data.Name + ': $' + data.Price;             $('#product').text(str);         })     .fail(         function (jqXHR, textStatus, err) {             $('#product').text('Error: ' + err);         }); }            · Typical GET response: HTTP/1.1 200 OK Server: ASP.NET Development Server/10.0.0.0 Date: Mon, 18 Jun 2012 04:30:33 GMT X-AspNet-Version: 4.0.30319 Cache-Control: no-cache Pragma: no-cache Expires: -1 Content-Type: application/json; charset=utf-8 Content-Length: 175 Connection: Close [{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer", "Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost": 2.05}] True OData support · Leverage Query Options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable]attribute. [Queryable]  public IQueryable<Supplier> GetSuppliers()  · Query: ~/Suppliers?$filter=Name eq ‘Microsoft’ · Applies the following selection filter on the server: GetSuppliers().Where(s => s.Name == “Microsoft”)  · Will pass the result to the formatter. · true support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation etc · vnext: ability to configure how EditLinks, SelfLinks and Ids are generated Self Hosting no dependency on ASPNET or IIS: using (var server = new HttpSelfHostServer(config)) {     server.OpenAsync().Wait(); Tracing · tracability tools, metrics – e.g. send to nagios · use your choice of tracing/logging library, whether that is ETW,NLog, log4net, or simply System.Diagnostics.Trace. · To collect traces, implement the ITraceWriter interface public class SimpleTracer : ITraceWriter {     public void Trace(HttpRequestMessage request, string category, TraceLevel level,         Action<TraceRecord> traceAction)     {         TraceRecord rec = new TraceRecord(request, category, level);         traceAction(rec);         WriteTrace(rec); · register the service with config · programmatically trace – has helper extension methods: Configuration.Services.GetTraceWriter().Info( · Performance tracing - pipeline writes traces at the beginning and end of an operation - TraceRecord class includes aTimeStamp property, Kind property set to TraceKind.Begin / End Security · Roles class methods: RoleExists, AddUserToRole · WebSecurity class methods: UserExists, .CreateUserAndAccount · Request.IsAuthenticated · Leverage HTTP 401 (Unauthorized) response · [AuthorizeAttribute(Roles="Administrator")] – can be applied to Controller or its action methods · See section in WebApi document on "Claim-based-security for ASP.NET Web APIs using DotNetOpenAuth" – adapt this to STS.--> Web API Host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer. 
http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/ · Use MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions · Then use AuthorizeAttribute on controllers and methods for role mapping- http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/ · Alternate option here is to rely on MVC App : http://forums.asp.net/t/1831767.aspx/1
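
To tie the routing and response-code guidance together, here is a minimal sketch; the controller, model and in-memory list are illustrative only, while the "DefaultApi" route name matches the one referenced by Url.Link in the PostProduct snippet above.

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Web.Http;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // {id} is optional so that GET api/products and GET api/products/5 both route here
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });
        }
    }

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Method names starting with Get/Post/Put/Delete are matched to the HTTP verb by convention
    public class ProductsController : ApiController
    {
        private static readonly List<Product> _products = new List<Product>
        {
            new Product { Id = 1, Name = "Tomato Soup" }
        };

        public IEnumerable<Product> GetAllProducts()
        {
            return _products;
        }

        public Product GetProduct(int id)
        {
            var product = _products.FirstOrDefault(p => p.Id == id);
            if (product == null)
            {
                // Return 404 rather than 200 with an empty body
                throw new HttpResponseException(HttpStatusCode.NotFound);
            }
            return product;
        }
    }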

    Read the article

  • Know More About Oracle Row Lock

    - by Liu Maclean
    ??????Oracle??????????row lock,??ORACLE????????????????????,row lock???????????????????????????????,??Server Process?pin????block buffer????????? ????????,?process A ??update???????? Z?????????, ???????rollback???commit;??Process B??????DML??, ???????rowid???? Z???, ???????????process A????????ITL???,????????cleanout??,????????row???????????commit, ???????Process B????”enq: TX – row lock contention”??????? ????Process B????????????? ?????????Process A???????row,??Process B???????”enq: TX – row lock contention”???? ????????  ????????: SESSION A: SQL> select * from v$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE    10.2.0.5.0      Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create table maclean_lock(t1 int); Table created. SQL> insert into maclean_lock values (1); 1 row created. SQL> commit; Commit complete. SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from maclean_lock; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------                                67642                                    1 SQL>  select distinct sid from v$mystat;        SID ----------        142 SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));        PID SPID ---------- ------------         17 15636 ??SESSION A ????savepoint ,?update ?????????         SQL>  savepoint NONLOCK; Savepoint created. SQL> select * From v$Lock where sid=142; no rows selected SQL> set linesize 140 pagesize 1400 SQL>  update maclean_lock set t1=t1+2; 1 row updated. SQL> select * From v$Lock where sid=142; ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- 0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0          6          0 00000000914B4008 00000000914B4040        142 TX     393232        609          6          0          6          0         SQL> select dump(3,16) from dual; DUMP(3,16) -------------------------------------------------------------------------------- Typ=2 Len=2: c1,4 ALTER SYSTEM DUMP DATAFILE 1 BLOCK 67642;  Object id on Block? Y  seg/obj: 0xda16  csc: 0x00.234718  itc: 2  flg: O  typ: 1 - DATA      fsl: 0  fnx: 0x0 ver: 0x01  Itl           Xid                  Uba         Flag  Lck        Scn/Fsc 0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711 0x02   0x0007.018.000001fe  0x0080065c.017a.02  ----    1  fsc 0x0000.00000000 data_block_dump,data header at 0x81d185c =============== tsiz: 0x1fa0 hsiz: 0x14 pbl: 0x081d185c bdba: 0x0041083a      76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f9a avsp=0x1f83 tosp=0x1f83 0xe:pti[0]      nrow=1  offs=0 0x12:pri[0]     offs=0x1f9a block_row_dump: tab 0, row 0, @0x1f9a tl: 6 fb: --H-FL-- lb: 0x2  cc: 1 col  0: [ 2]  c1 04 end_of_block_dump ?? BLOCK DUMP ???? ??????XID=0x0007.018.000001fe ?transaction?? lb:0x1 ??SESSION B ,?????UPDATE?? 
???enq: TX - row lock contention ?? SQL> select distinct sid from v$mystat;        SID ----------        140 SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));        PID SPID ---------- ------------         24 15652 SQL> alter system set "_trace_events"='10000-10999:255:24'; System altered.         SQL> update maclean_lock set t1=t1+2; select * From v$Lock where sid=142 or sid=140 order by sid; SESSION C: SQL> select * From v$Lock where sid=142 or sid=140 order by sid; ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- 0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0         84          0 00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6         84          0 00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0        312          1 0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0        312          0 ???? SESSION B SID=140 ?SESSION A ?TX ENQUEUE ?X mode?REQUEST SQL> oradebug dump systemstate 266; Statement processed. SESSION B waiter's enqueue lock       SO: 0x924f4a58, type: 5, owner: 0x92bb8dc8, flag: INIT/-/-/0x00       (enqueue) TX-00070018-000001FE    DID: 0001-0018-00000022       lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6       req: X, lock_flag: 0x0, lock: 0x924f4a78, res: 0x925617c0       own: 0x92b76be0, sess: 0x92b76be0, proc: 0x92a737a0, prv: 0x925617e0 TX-00070018-000001FE=> TX 458776 510 SESSION A owner's enqueue lock       SO: 0x914b51e8, type: 40, owner: 0x92b796d0, flag: INIT/-/-/0x00       (trans) flg = 0x1e03, flg2 = 0xc0000, prx = 0x0, ros = 2147483647 bsn = 0xed5 bndsn = 0xee7 spn = 0xef7       efd = 3       file:xct.c lineno:1179       DID: 0001-0011-000000C2       parent xid: 0x0000.000.00000000       env: (scn: 0x0000.00234718  xid: 0x0007.018.000001fe  uba: 0x0080065c.017a.02  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x00 00.00234718 0sch: scn: 0x0000.00000000)       cev: (spc = 7818  arsp = 914e8310  ubk tsn: 1 rdba: 0x0080065c  useg tsn: 1 rdba: 0x00800069             hwm uba: 0x0080065c.017a.02  col uba: 0x00000000.0000.00             num bl: 1 bk list: 0x91435070)             cr opc: 0x0 spc: 7818 uba: 0x0080065c.017a.02       (enqueue) TX-00070018-000001FE    DID: 0001-0011-000000C2       lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6       mode: X, lock_flag: 0x0, lock: 0x914b5220, res: 0x925617c0       own: 0x92b796d0, sess: 0x92b796d0, proc: 0x92a6ffd8, prv: 0x925617d0        xga: 0x8b7c6d40, heap: UGA       Trans IMU st: 2 Pool index 65535, Redo pool 0x914b58d0, Undo pool 0x914b59b8       Redo pool range [0x86de640 0x86de640 0x86e0e40]       Undo pool range [0x86dbe40 0x86dbe40 0x86de640]         ----------------------------------------         SO: 0x91435070, type: 39, owner: 0x914b51e8, flag: -/-/-/0x00         (List of Blocks) next index = 1         index   itli   buffer hint   rdba       savepoint         -----------------------------------------------------------             0      2   0x647f1fc8    0x41083a     0xee7 ?SESSION A? ROLLBACK ?savepoint: SQL> rollback to NONLOCK; Rollback complete. ????savepoint ??update??????? ??UPDATE???????? 
ROLLBACK: SQL> select * From v$Lock where sid=142 or sid=140; ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ---------- 00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6        822          0 0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0        822          0 00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0       1050          1 ???? SESSION A 142 ???SAVEPOINT ???????TM LOCK ????? ROLLBACK TO SAVEPOINT?????SESSION???TX LOCK!!!! ??????SESSION 142???TX ID1=458776 ID2=510, ????ROLLBACK TO SAVEPOINT?????????ABORT TRANSACTION ?? SESSION B  SID=140??  SESSION A ?? , ?????????SESSION B? update???HANG?? ?????????CACHE?????:  Object id on Block? Y  seg/obj: 0xda16  csc: 0x00.2347b7  itc: 2  flg: O  typ: 1 - DATA      fsl: 0  fnx: 0x0 ver: 0x01  Itl           Xid                  Uba         Flag  Lck        Scn/Fsc 0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711 0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000 data_block_dump,data header at 0x745d85c =============== tsiz: 0x1fa0 hsiz: 0x14 pbl: 0x0745d85c bdba: 0x0041083a      76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f9a avsp=0x1f83 tosp=0x1f83 0xe:pti[0]      nrow=1  offs=0 0x12:pri[0]     offs=0x1f9a block_row_dump: tab 0, row 0, @0x1f9a tl: 6 fb: --H-FL-- lb: 0x0  cc: 1 col  0: [ 2]  c1 02 end_of_block_dump ???? ITL=0x02? ?????????,col  0: [ 2]  c1 02 ????????? ?????????SESSION D ,??????row lock?? ?UPDATE???????? SESSION D: SQL> update maclean_lock set t1=t1+2; 1 row updated. SQL> rollback; Rollback complete. ??SESSION B ??????????? ?????ORACLE????????, ??????????? TX lock?? row lock , ????????2??? row lock?????????, ?TX lock????????ENQUEUE LOCK???? ?????????PROCESS K?DML???????????????????????,??????????TX LOCK, ????PROCESS Z?????????????????????????ROW LOCK????????, ???????OLTP?????????????????????? ??ROW LOCK?Release ??????TX?ENQUEUE LOCK,?????????Process J ????????????, Process K??????????? ,Process K?????????,???row piece?lb??0x0 ,?????ITL, Process Z???ITL???????Process J????XID,?????Process J?????TX lock, PROCESS K ???TX resource?Enqueue Waiter Linked List?????X mode(exclusive)?enqueue lock? ???Process J??TX lock?,Process J?????TX resource?Enqueue Waiter Linked List ???Process K??????,??POST?????Process K? TX lock??????, ???????row lock???????,????????? ?????????? ?????: SESSION A ???PID =17 ?????????????????? SESSION B ???PID =24 ??????? "_trace_events"='10000-10999:255:24';  KST trace ??????? Server Process??? SESSION A PID=17  ?? acqure?SX mode???TM Lock ,?? ????Transaction?????UNDO SEGMENT 7,???XID 7.24.510, ?acquire ?X mode? TX-00070018-000001fe ? ?????? 00070018-000001fe ???? 7- 24 - 510? XID ? 
781F4B8A:007A569C    17   142 10704  83 ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention" 781F4B92:007A569D    17   142 10704  19 ksqgtl: SUCCESS 781F4BB3:007A569E    17   142 10812   2 0x000000000041083A 0x0000000000000000 0x0000000000234717 781F4BBA:007A569F    17   142 10812   3 0x0000000000000000 0x0000000000000000 0x0000000000000000 781F4BC0:007A56A0    17   142 10812   4 0x0000000000000000 0x0000000000000000 0x0000000000000000 781F4BD3:007A56A1    17   142 10812   5 0x000000000041083A 0x0000000000000000 0x0000000000000000 781F4BFE:007A56A2    17   142 10811   1 0x000000000041083A 0x0000000000000000 0x0000000000234711 0x0000000000000002 781F4C06:007A56A3    17   142 10811   2 0x000000000041083A 0x0000000000000000 0x0000000000234718 0x00007FA074EDA560 781F4C26:007A56A4    17   142 10813   1 ktubnd: Bind usn 7 nax 1 nbx 0 lng 0 par 0 781F4C43:007A56A5    17   142 10813   2 ktubnd: Txn Bound xid: 7.24.510 781F4C4A:007A56A6    17   142 10704  83 ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|XACT why="contention" 781F4C51:007A56A7    17   142 10704  19 ksqgtl: SUCCESS ?????????? ???????? 781F4CBF:007A56A8    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 781F4CCC:007A56A9    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=13 781F4CDE:007A56AA    17   142 10005   1 KSL WAIT BEG [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0 786BD85D:007A57E0    17   142 10005   2 KSL WAIT END [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0 time=5016447 786BD966:007A57E1    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 786BD96E:007A57E2    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=8 SESSION B ???PID =24  ,??????? SX mode? TM lock,??row lock? acquire X mode?TX-00070018-000001fe ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention" ksqgtl: SUCCESS 0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000000000001 0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000008A63780 0x0000000000000001 0x0000000000800861 0x0000000000000241 0x0000000000000001 0x000000000041083A 0x0000000000000001 0x0000000000000001 0x000000000041083A 0x0000000000000000 0x00000000002354F9 0x0000000000000002 ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|LONG why="row lock contention" C4048EBD:007F52B6    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2929879 C4048ED4:007F52B7    24   140 10005   1 KSL WAIT BEG [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe C43146CA:007F535E    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2930676 ????????? ,PID=24 ??????ksqcmi???????? deadlock C43146D9:007F535F    24   140 10704 134 ksqcmi: performing local deadlock detection on TX-00070018-000001fe C43146F8:007F5360    24   140 10704 150 ksqcmi: deadlock not detected on TX-00070018-000001fe ?? ??? PID 17 ??ROLLBACK ???? 
,????????: PID 17 ROLLBACK; D7A495BB:007F9D3E    17   142 10005   4 KSL POST SENT postee=24 loc='ksqrcl' id1=0 id2=0 name=   type=0 D7A495D8:007F9D3F    17   142 10444  12 ABORT TRANSACTION - xid: 0x0007.018.000001fe ??  PID 17 ??? TX resource?Enqueue Waiter linked List ???PID 24???,????KSL POST SENT ?? PID 24, ???ksqrcl???ENQUEUE LOCK ?PID 24??????KSL POST (KSL POST RCVD poster=17), ?ksqgtl???? TX-00070018-000001fe ?? ksqrcl??, ??PID 24???????? TX lock?USN ,??????? USN 3 XID 3.11.582 ,???acquire TX-0003000b-00000246 D7A49616:007F9D41    24   140 10005   3 KSL POST RCVD poster=17 loc='ksqrcl' id1=0 id2=0 name=   type=0 fac#=0 facpost=1 D7A4961C:007F9D42    24   140 10704  19 ksqgtl: SUCCESS D7A4967D:007F9D43    24   140 10704 117 ksqrcl: release TX-00070018-000001fe mode=X D7A496A5:007F9D44    24   140 10813   1 ktubnd: Bind usn 3 nax 1 nbx 0 lng 0 par 0 D7A496C2:007F9D45    24   140 10813   2 ktubnd: Txn Bound xid: 3.11.582 D7A496C7:007F9D46    24   140 10704  83 ksqgtl: acquire TX-0003000b-00000246 mode=X flags=GLOBAL|XACT why="contention" D7A496E4:007F9D47    24   140 10704  19 ksqgtl: SUCCESS ROW LOCK?Release ??????TX?ENQUEUE LOCK,?????????Process J ????????????, Process K??????????? ,Process K?????????,???row piece?lb??0×0 ,?????ITL,Process Z???ITL???????Process J????XID,?????Process J?????TX lock,PROCESS K ???TX resource?Enqueue Waiter Linked List?????X mode(exclusive)?enqueue lock? ???Process J??TX lock?,Process J?????TX resource?Enqueue Waiter Linked List ???Process K??????,??POST?????Process K? TX lock??????,???????row lock???????,?????????
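
The v$lock output above shows the classic blocker/waiter pair on the TX enqueue (same ID1/ID2, BLOCK=1 on the holder, REQUEST=6 on the waiter). The same relationship can be listed directly with a standard self-join on v$lock, assuming access to the dynamic performance views:

    SELECT blocker.sid  AS blocker_sid,
           waiter.sid   AS waiter_sid,
           blocker.type AS lock_type,
           blocker.id1,
           blocker.id2
      FROM v$lock blocker,
           v$lock waiter
     WHERE blocker.block  = 1
       AND waiter.request > 0
       AND blocker.id1    = waiter.id1
       AND blocker.id2    = waiter.id2;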

    Read the article

  • Top 50 ASP.Net Interview Questions & Answers

    - by Samir R. Bhogayta
    1. What is ASP.Net? It is a framework developed by Microsoft on which we can develop new generation web sites using web forms(aspx), MVC, HTML, Javascript, CSS etc. Its successor of Microsoft Active Server Pages(ASP). Currently there is ASP.NET 4.0, which is used to develop web sites. There are various page extensions provided by Microsoft that are being used for web site development. Eg: aspx, asmx, ascx, ashx, cs, vb, html, xml etc. 2. What’s the use of Response.Output.Write()? We can write formatted output  using Response.Output.Write(). 3. In which event of page cycle is the ViewState available?   After the Init() and before the Page_Load(). 4. What is the difference between Server.Transfer and Response.Redirect?   In Server.Transfer page processing transfers from one page to the other page without making a round-trip back to the client’s browser.  This provides a faster response with a little less overhead on the server.  The clients url history list or current url Server does not update in case of Server.Transfer. Response.Redirect is used to redirect the user’s browser to another page or site.  It performs trip back to the client where the client’s browser is redirected to the new page.  The user’s browser history list is updated to reflect the new address. 5. From which base class all Web Forms are inherited? Page class.  6. What are the different validators in ASP.NET? Required field Validator Range  Validator Compare Validator Custom Validator Regular expression Validator Summary Validator 7. Which validator control you use if you need to make sure the values in two different controls matched? Compare Validator control. 8. What is ViewState? ViewState is used to retain the state of server-side objects between page post backs. 9. Where the viewstate is stored after the page postback? ViewState is stored in a hidden field on the page at client side.  ViewState is transported to the client and back to the server, and is not stored on the server or any other external source. 10. How long the items in ViewState exists? They exist for the life of the current page. 11. What are the different Session state management options available in ASP.NET? In-Process Out-of-Process. In-Process stores the session in memory on the web server. Out-of-Process Session state management stores data in an external server.  The external server may be either a SQL Server or a State Server.  All objects stored in session are required to be serializable for Out-of-Process state management. 12. How you can add an event handler?  Using the Attributes property of server side control. e.g. [csharp] btnSubmit.Attributes.Add(“onMouseOver”,”JavascriptCode();”) [/csharp] 13. What is caching? Caching is a technique used to increase performance by keeping frequently accessed data or files in memory. The request for a cached file/data will be accessed from cache instead of actual location of that file. 14. What are the different types of caching? ASP.NET has 3 kinds of caching : Output Caching, Fragment Caching, Data Caching. 15. Which type if caching will be used if we want to cache the portion of a page instead of whole page? Fragment Caching: It caches the portion of the page generated by the request. For that, we can create user controls with the below code: [xml] <%@ OutputCache Duration=”120? VaryByParam=”CategoryID;SelectedID”%> [/xml] 16. List the events in page life cycle.   1) Page_PreInit 2) Page_Init 3) Page_InitComplete 4) Page_PreLoad 5) Page_Load 6) Page_LoadComplete 7) Page_PreRender 8)Render 17. 
Can we have a web application running without web.Config file?   Yes 18. Is it possible to create web application with both webforms and mvc? Yes. We have to include below mvc assembly references in the web forms application to create hybrid application. [csharp] System.Web.Mvc System.Web.Razor System.ComponentModel.DataAnnotations [/csharp] 19. Can we add code files of different languages in App_Code folder?   No. The code files must be in same language to be kept in App_code folder. 20. What is Protected Configuration? It is a feature used to secure connection string information. 21. Write code to send e-mail from an ASP.NET application? [csharp] MailMessage mailMess = new MailMessage (); mailMess.From = “[email protected]”; mailMess.To = “[email protected]”; mailMess.Subject = “Test email”; mailMess.Body = “Hi This is a test mail.”; SmtpMail.SmtpServer = “localhost”; SmtpMail.Send (mailMess); [/csharp] MailMessage and SmtpMail are classes defined System.Web.Mail namespace.  22. How can we prevent browser from caching an ASPX page?   We can SetNoStore on HttpCachePolicy object exposed by the Response object’s Cache property: [csharp] Response.Cache.SetNoStore (); Response.Write (DateTime.Now.ToLongTimeString ()); [/csharp] 23. What is the good practice to implement validations in aspx page? Client-side validation is the best way to validate data of a web page. It reduces the network traffic and saves server resources. 24. What are the event handlers that we can have in Global.asax file? Application Events: Application_Start , Application_End, Application_AcquireRequestState, Application_AuthenticateRequest, Application_AuthorizeRequest, Application_BeginRequest, Application_Disposed,  Application_EndRequest, Application_Error, Application_PostRequestHandlerExecute, Application_PreRequestHandlerExecute, Application_PreSendRequestContent, Application_PreSendRequestHeaders, Application_ReleaseRequestState, Application_ResolveRequestCache, Application_UpdateRequestCache Session Events: Session_Start,Session_End 25. Which protocol is used to call a Web service? HTTP Protocol 26. Can we have multiple web config files for an asp.net application? Yes. 27. What is the difference between web config and machine config? Web config file is specific to a web application where as machine config is specific to a machine or server. There can be multiple web config files into an application where as we can have only one machine config file on a server. 28.  Explain role based security ?   Role Based Security used to implement security based on roles assigned to user groups in the organization. Then we can allow or deny users based on their role in the organization. Windows defines several built-in groups, including Administrators, Users, and Guests. [xml] <AUTHORIZATION>< authorization > < allow roles=”Domain_Name\Administrators” / >   < !– Allow Administrators in domain. — > < deny users=”*”  / >                            < !– Deny anyone else. — > < /authorization > [/xml] 29. What is Cross Page Posting? When we click submit button on a web page, the page post the data to the same page. The technique in which we post the data to different pages is called Cross Page posting. This can be achieved by setting POSTBACKURL property of  the button that causes the postback. Findcontrol method of PreviousPage can be used to get the posted values on the page to which the page has been posted. 30. How can we apply Themes to an asp.net application? We can specify the theme in web.config file. 
Below is the code example to apply theme: [xml] <configuration> <system.web> <pages theme=”Windows7? /> </system.web> </configuration> [/xml] 31: What is RedirectPermanent in ASP.Net?   RedirectPermanent Performs a permanent redirection from the requested URL to the specified URL. Once the redirection is done, it also returns 301 Moved Permanently responses. 32: What is MVC? MVC is a framework used to create web applications. The web application base builds on  Model-View-Controller pattern which separates the application logic from UI, and the input and events from the user will be controlled by the Controller. 33. Explain the working of passport authentication. First of all it checks passport authentication cookie. If the cookie is not available then the application redirects the user to Passport Sign on page. Passport service authenticates the user details on sign on page and if valid then stores the authenticated cookie on client machine and then redirect the user to requested page 34. What are the advantages of Passport authentication? All the websites can be accessed using single login credentials. So no need to remember login credentials for each web site. Users can maintain his/ her information in a single location. 35. What are the asp.net Security Controls? <asp:Login>: Provides a standard login capability that allows the users to enter their credentials <asp:LoginName>: Allows you to display the name of the logged-in user <asp:LoginStatus>: Displays whether the user is authenticated or not <asp:LoginView>: Provides various login views depending on the selected template <asp:PasswordRecovery>:  email the users their lost password 36: How do you register JavaScript for webcontrols ? We can register javascript for controls using <CONTROL -name>Attribtues.Add(scriptname,scripttext) method. 37. In which event are the controls fully loaded? Page load event. 38: what is boxing and unboxing? Boxing is assigning a value type to reference type variable. Unboxing is reverse of boxing ie. Assigning reference type variable to value type variable. 39. Differentiate strong typing and weak typing In strong typing, the data types of variable are checked at compile time. On the other hand, in case of weak typing the variable data types are checked at runtime. In case of strong typing, there is no chance of compilation error. Scripts use weak typing and hence issues arises at runtime. 40. How we can force all the validation controls to run? The Page.Validate() method is used to force all the validation controls to run and to perform validation. 41. List all templates of the Repeater control. ItemTemplate AlternatingltemTemplate SeparatorTemplate HeaderTemplate FooterTemplate 42. List the major built-in objects in ASP.NET?  Application Request Response Server Session Context Trace 43. What is the appSettings Section in the web.config file? The appSettings block in web config file sets the user-defined values for the whole application. For example, in the following code snippet, the specified ConnectionString section is used throughout the project for database connection: [csharp] <em><configuration> <appSettings> <add key=”ConnectionString” value=”server=local; pwd=password; database=default” /> </appSettings></em> [/csharp] 44.      Which data type does the RangeValidator control support? The data types supported by the RangeValidator control are Integer, Double, String, Currency, and Date. 45. What is the difference between an HtmlInputCheckBox control and anHtmlInputRadioButton control? 
In an HtmlInputCheckBox control, multiple items can be selected, whereas in HtmlInputRadioButton controls only a single item from the group can be selected. 46. Which namespaces are necessary to create a localized application? System.Globalization System.Resources 47. What are the different types of cookies in ASP.NET? Session Cookie – Resides on the client machine for a single session, until the user logs out. Persistent Cookie – Resides on a user’s machine for the period specified for its expiry, such as 10 days, one month, or never. 48. What is the file extension of a web service? Web services have the file extension .asmx. 49. What are the components of ADO.NET? The components of ADO.NET are DataSet, DataReader, DataAdapter, Command, and Connection. 50. What is the difference between ExecuteScalar and ExecuteNonQuery? ExecuteScalar returns a single output value, whereas ExecuteNonQuery does not return data but rather the number of rows affected by the query. ExecuteScalar is used for fetching a single value, and ExecuteNonQuery is used to execute Insert and Update statements.
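
A short illustration of question 50, reusing the placeholder connection string from question 43 and an assumed Products table (SqlConnection and SqlCommand live in the System.Data.SqlClient namespace): [csharp]
using (var conn = new SqlConnection("server=local; pwd=password; database=default"))
{
    conn.Open();
    // ExecuteScalar returns a single value - the first column of the first row
    int count = (int)new SqlCommand("SELECT COUNT(*) FROM Products", conn).ExecuteScalar();
    // ExecuteNonQuery returns the number of rows affected by the statement
    int rows = new SqlCommand("UPDATE Products SET Price = Price * 1.1", conn).ExecuteNonQuery();
}
[/csharp]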

    Read the article

  • Autounattend.xml not being recognized in VirtualBox

    - by beagle
    I am working my way through the steps on this page to prepare an unattended installation of Windows 7 Enterprise x64 for purposes of a college assignment which simply requires the process to be carried out and documented. Both the "technician" and "reference" computers are virtual machines created in VirtualBox 4.3.12, as will be the destination computer. I seem to have successfully completed Step 1, building an Autounattend.xml answer file using Windows System Image Manager, in as far as the answer file validates successfully. The problem arises when I try to install Windows on the reference machine from the DVD image in conjunction with the Autounattend file on a USB drive. I have tried a couple of different USB devices, and the devices themselves seem to be recognized, but the answer file does not, as instead of taking the configuration settings from the file the user interface appears as in a manual installation. Has anyone come across this problem or a solution? The xml created by Windows SIM is below for reference in case the problem is with the file itself. <?xml version="1.0" encoding="utf-8"?> <unattend xmlns="urn:schemas-microsoft-com:unattend"> <settings pass="oobeSystem"> <component name="Microsoft-Windows-Deployment" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Reseal> <Mode>Audit</Mode> </Reseal> </component> <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <OOBE> <HideEULAPage>true</HideEULAPage> <ProtectYourPC>3</ProtectYourPC> </OOBE> </component> </settings> <settings pass="windowsPE"> <component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <SetupUILanguage> <UILanguage>en-IE</UILanguage> </SetupUILanguage> <InputLocale>en-IE</InputLocale> <SystemLocale>en-IE</SystemLocale> <UILanguage>en-IE</UILanguage> <UserLocale>en-IE</UserLocale> </component> <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <DiskConfiguration> <Disk wcm:action="add"> <CreatePartitions> <CreatePartition wcm:action="add"> <Order>1</Order> <Size>300</Size> <Type>Primary</Type> </CreatePartition> <CreatePartition wcm:action="add"> <Order>2</Order> <Extend>true</Extend> <Type>Primary</Type> </CreatePartition> </CreatePartitions> <ModifyPartitions> <ModifyPartition wcm:action="add"> <Active>true</Active> <Format>NTFS</Format> <Label>System</Label> <Order>1</Order> <PartitionID>1</PartitionID> </ModifyPartition> <ModifyPartition wcm:action="add"> <Format>NTFS</Format> <Label>Windows</Label> <Order>2</Order> <PartitionID>2</PartitionID> </ModifyPartition> </ModifyPartitions> <DiskID>0</DiskID> <WillWipeDisk>true</WillWipeDisk> </Disk> <WillShowUI>OnError</WillShowUI> </DiskConfiguration> <ImageInstall> <OSImage> <InstallTo> <DiskID>0</DiskID> <PartitionID>2</PartitionID> 
</InstallTo> <InstallToAvailablePartition>false</InstallToAvailablePartition> <WillShowUI>OnError</WillShowUI> </OSImage> </ImageInstall> <UserData> <ProductKey> <WillShowUI>OnError</WillShowUI> </ProductKey> <AcceptEula>true</AcceptEula> </UserData> </component> </settings> <settings pass="specialize"> <component name="Microsoft-Windows-IE-InternetExplorer" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <Home_Page>http://www.example.com</Home_Page> </component> </settings> <cpi:offlineImage cpi:source="wim://technician/users/user/desktop/install.wim#Windows 7 ENTERPRISE" xmlns:cpi="urn:schemas-microsoft-com:cpi" />

    Read the article

  • Nagios plugin script not working as expected

    - by Linker3000
    I have modified an off-the-shelf Nagios plugin perl script to (in theory) return a one or zero according to the existence, or not, of a file on a remote linux server. The script runs a remote ssh session and logs in as the nagios user. The remote linux servers have private keys setup for that user, and on the bash command line the script works as expected, but when run as a plugin it always returns '1' (true) even if the file does not exist. Some help with the logic or a comment on why things are not working as expected within Nagios would be appreciated. I'd prefer to use this ssh login method rather than having to install nrpe on all the linux servers. To run from a command line (assuming remote server has a user called nagios with a valid private key): ./check_reboot_required -e ssh -H remote-servers-ip-addr -p 'filename-to-check' -v Ta. #! /usr/bin/perl -w # # # License Information: # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. # ############################################################################ use POSIX; use strict; use Getopt::Long; use lib "/usr/lib/nagios/plugins" ; use vars qw($host $opt_V $opt_h $opt_v $verbose $PROGNAME $pattern $opt_p $mmin $opt_e $opt_t $opt_H $status $state $msg $msg_q $MAILQ $SHELL $device $used $avail $percent $fs $blocks $CMD $RMTOS); use utils qw(%ERRORS &print_revision &support &usage ); sub print_help (); sub print_usage (); sub process_arguments (); $ENV{'PATH'}=''; $ENV{'BASH_ENV'}=''; $ENV{'ENV'}=''; $PROGNAME = "check_reboot_required"; Getopt::Long::Configure('bundling'); $status = process_arguments(); if ($status){ print "ERROR: processing arguments\n"; exit $ERRORS{'UNKNOWN'}; } $SIG{'ALRM'} = sub { print ("ERROR: timed out waiting for $CMD on $host\n"); exit $ERRORS{'WARNING'}; }; $host = $opt_H; $pattern = $opt_p; print "Pattern >" . $pattern . "< " if $verbose; alarm($opt_t); #$CMD = "/usr/bin/find " . $pattern . " -type f 2>/dev/null| /usr/bin/wc -l"; $CMD = "[ -f " . $pattern . " ] && echo 1 || echo 0"; alarm($opt_t); ## get cmd output from remote system if (! open (OUTPUT, "$SHELL $host $CMD|" ) ) { print "ERROR: could not open $CMD on $host\n"; exit $ERRORS{'UNKNOWN'}; } my $perfdata = ""; my $state = "3"; my $msg = "Indeterminate result"; # only first line is relevant in this iteration. while (<OUTPUT>) { my $result = chomp($_); $msg = $result; print "Shell returned >" . $result . "< length is " . length($result) . " " if $verbose; if ( $result == 1 ) { $msg = "Reboot required (NB: Result still not accurate)" . $result ; $state = $ERRORS{'WARNING'}; last; } elsif ( $result == 0 ) { $msg = "No reboot required (NB: Result still not accurate) " . 
$result ; $state = $ERRORS{'OK'}; last; } else { $msg = "Output received, but it was neither a 1 nor a 0" ; last; } } close (OUTPUT); print "$msg | $perfdata\n"; exit $state; ##################################### #### subs sub process_arguments(){ GetOptions ("V" => \$opt_V, "version" => \$opt_V, "v" => \$opt_v, "verbose" => \$opt_v, "h" => \$opt_h, "help" => \$opt_h, "e=s" => \$opt_e, "shell=s" => \$opt_e, "p=s" => \$opt_p, "pattern=s" => \$opt_p, "t=i" => \$opt_t, "timeout=i" => \$opt_t, "H=s" => \$opt_H, "hostname=s" => \$opt_H ); if ($opt_V) { print_revision($PROGNAME,'$Revision: 1.0 $ '); exit $ERRORS{'OK'}; } if ($opt_h) { print_help(); exit $ERRORS{'OK'}; } if (defined $opt_v ){ $verbose = $opt_v; } if (defined $opt_e ){ if ( $opt_e eq "ssh" ) { if (-x "/usr/local/bin/ssh") { $SHELL = "/usr/local/bin/ssh"; } elsif ( -x "/usr/bin/ssh" ) { $SHELL = "/usr/bin/ssh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } elsif ( $opt_e eq "rsh" ) { $SHELL = "/usr/bin/rsh"; } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } } else { print_usage(); exit $ERRORS{'UNKNOWN'}; } unless (defined $opt_t) { $opt_t = $utils::TIMEOUT ; # default timeout } unless (defined $opt_H) { print_usage(); exit $ERRORS{'UNKNOWN'}; } return $ERRORS{'OK'}; } sub print_usage () { print "Usage: $PROGNAME -e <shell> -H <hostname> -p <directory/file pattern> [-t <timeout>] [-v verbose]\n"; } sub print_help () { print_revision($PROGNAME,'$Revision: 0.1 $'); print "\n"; print_usage(); print "\n"; print " Checks for the presence of a 'reboot-required' file on a remote host via SSH or RSH\n"; print "-e (--shell) = ssh or rsh (required)\n"; print "-H (--hostname) = remote server name (required)"; print "-p (--pattern) = File pattern for find command (default = /var/run/reboot-required)\n"; print "-t (--timeout) = Plugin timeout in seconds (default = $utils::TIMEOUT)\n"; print "-h (--help)\n"; print "-V (--version)\n"; print "-v (--verbose) = debugging output\n"; print "\n\n"; support(); }
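
One likely culprit in the while loop above: in Perl, chomp($_) returns the number of characters it removed (normally 1), not the chomped line, so "my $result = chomp($_)" stores 1 no matter what the remote command printed. A corrected fragment, keeping the original variable names, might look like this:

    while (<OUTPUT>) {
        # Copy the line first, then strip the trailing newline in place;
        # the old "my $result = chomp($_)" captured chomp's return value instead.
        chomp( my $result = $_ );
        print "Shell returned >" . $result . "< length is " . length($result) . " " if $verbose;
        if ( $result == 1 ) {
            $msg   = "Reboot required";
            $state = $ERRORS{'WARNING'};
            last;
        }
        elsif ( $result == 0 ) {
            $msg   = "No reboot required";
            $state = $ERRORS{'OK'};
            last;
        }
        else {
            $msg = "Output received, but it was neither a 1 nor a 0";
            last;
        }
    }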

    Read the article

  • Why can't I reinstall MySQL?

    - by Johannes Nielsen
    I've been looking all around the Internet for an answer but didn't find anything. I hope you can help me now. I have a server with MySQL. From one day to another, MySQL didn't let me enter with my root password anymore (accsess denied for user 'root'@'localhost' using password: 'YES'). So I tried two ways to reset the password: No.1: I typed: shell> /etc/init.d/mysqld stop To stop MySQL. Then I restarted it skipping the grant-tables: shell> mysqld_safe --skip-grant-tables So I was able to log in as root and change the password using: mysql> UPDATE mysql.user SET Password = PASSWORD('MyNewPassword') WHERE User = 'root'; FLUSH PRIVILEGES; I restarted MySQL and tried to log in as root with my new password - didn't work. So I tried the solution that's described here: http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html (I don't want to post it here because this post is already pretty long). Didn't work either. Actually it made it worse, because since that day, every time I try to start MySQL, it doesn't even ask me for my password, but I get: shell> ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Well, I've looked up what it means and found that my mysqld.sock is missing. I tried to create it using touch but MySQL can't start with that socket. Now I'm trying to reinstall MySQL but everytime I type in shell> apt-get --purge remove mysql-server mysql-common mysql-client In that or any other order or every one of those three alone, I get: shell> Reading package lists... Done shell> Building dependency tree shell> Reading state information... Done shell> Package mysql-client is not installed, so not removed shell> Package mysql-server is not installed, so not removed shell> You might want to run 'apt-get -f install' to correct these: shell> The following packages have unmet dependencies: shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2) shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> psa-firewall : Depends: plesk-core (>= 11.0.9) but it is not installable shell> Depends: mysql-server but it is not going to be installed shell> psa-spamassassin : Depends: plesk-core (>= 11.0.9) but it is not installable shell> psa-vpn : Depends: plesk-core (>= 11.0.9) but it is not installable shell> Depends: plesk-base (>= 11.0.9) but it is not installable shell> Depends: mysql-server but it is not going to be installed shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I said to my self "let's just remove those files with depenencies, too" (that psa-stuff since plesk is virtual and can't be uninstalled)... Guess what happened: shell> Reading package lists... Done shell> Building dependency tree shell> Reading state information... 
Done shell> Package mysql-client is not installed, so not removed shell> Package mysql-server is not installed, so not removed shell> You might want to run 'apt-get -f install' to correct these: shell> The following packages have unmet dependencies: shell> libmysqlclient18 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> libmysqlclient18:i386 : Depends: mysql-common:i386 (>= 5.5.28-0ubuntu0.12.04.2) shell> mysql-client-5.5 : Depends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> mysql-server-5.5 : PreDepends: mysql-common (>= 5.5.28-0ubuntu0.12.04.2) but it is not going to be installed shell> E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). Of course I tried apt-get -f install, too many times even. What am I doing wrong? No matter, which other packages I include into apt-get --purge remove, I always get new dependencies. Do I have to delete every MySQL-related directory and file manually? Hope there's someone out there who can help me! Cheers! EDIT: After trying apt-get purge mysql-server mysql-common mysql-client libmysqlclient18 libmysqlclient18:i386 mysql-client-5.5 mysql-server-5.5 psa-firewall psa-spamassassin psa-vpn Reading package lists... Done Building dependency tree Reading state information... Done Package mysql-client is not installed, so not removed Package mysql-server is not installed, so not removed You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libdbd-mysql-perl : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed libmyodbc : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libmysqlclient18:i386 (>= 5.5.13-1) but it is not going to be installed php5-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed ruby-mysql : Depends: libmysqlclient18 (>= 5.5.13-1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). So I tried to remove all these and got: Building dependency tree Reading state information... Done Package mysql-client is not installed, so not removed Package mysql-server is not installed, so not removed You might want to run 'apt-get -f install' to correct these:qlclient18:i386 mysql The following packages have unmet dependencies: libmysql-ruby1.8 : Depends: ruby-mysql but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And actually I think removing that file, too solved my problem :-S Next time I'll try everything before asking :D Thank you Eric for keeping me couraged to just go on removing :D
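For reference, on Ubuntu 12.04 a full clean-out of the MySQL packages named in those errors usually comes down to something like the following (package names taken from the output above; the rm step destroys existing databases, so back up /var/lib/mysql first):

    shell> sudo apt-get -f install
    shell> dpkg -l | grep -i mysql                 # list every MySQL-related package still on the box
    shell> sudo apt-get purge mysql-server-5.5 mysql-client-5.5 mysql-common libmysqlclient18
    shell> sudo rm -rf /etc/mysql /var/lib/mysql   # remove leftover configuration and data files
    shell> sudo apt-get install mysql-server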

    Read the article

  • Overwriting TFS Web Services

    - by javarg
    In this blog I will share a technique I used to intercept TFS Web Services calls. This technique is a very invasive one and requires you to overwrite default TFS Web Services behavior. I only recommend taking such an approach when other means of TFS extensibility fail to provide the same functionality (this is not a supported TFS extensibility point). For instance, intercepting and aborting a Work Item change operation could be implemented using this approach (consider TFS Subscribers functionality before taking this approach, check Martin’s post about subscribers). So let’s get started. The technique consists in versioning TFS Web Services .asmx service classes. If you look into TFS’s ASMX services you will notice that versioning is supported by creating a class hierarchy between different product versions. For instance, let’s take the Work Item management service .asmx. Check the following .asmx file located at: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\_tfs_resources\WorkItemTracking\v3.0\ClientService.asmx The .asmx references the class Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3: <%-- Copyright (c) Microsoft Corporation. All rights reserved. --%> <%@ webservice language="C#" Class="Microsoft.TeamFoundation.WorkItemTracking.Server.ClientService3" %> The inheritance hierarchy for this service class follows: Note the naming convention used for service versioning (ClientService3, ClientService2, ClientService). We will need to overwrite the latest service version provided by the product (in this case ClientService3 for TFS 2010). The following example intercepts and analyzes WorkItem fields. Suppose we need to validate state changes with more advanced logic other than the provided validations/constraints of the process template. Important: Backup the original .asmx file and create one of your own.
Create a Visual Studio Web App Project and include a new ASMX Web Service in the project. Add the following references to the project (check the folder %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\): Microsoft.TeamFoundation.Framework.Server.dll Microsoft.TeamFoundation.Server.dll Microsoft.TeamFoundation.WorkItemTracking.Client.QueryLanguage.dll Microsoft.TeamFoundation.WorkItemTracking.Server.DataAccessLayer.dll Microsoft.TeamFoundation.WorkItemTracking.Server.DataServices.dll Replace the default service implementation with something similar to the following code: Code Snippet /// <summary> /// Inherit from ClientService3 to overwrite default Implementation /// </summary> [WebService(Namespace = "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/ClientServices/03", Description = "Custom Team Foundation WorkItemTracking ClientService Web Service")] public class CustomTfsClientService : ClientService3 { [WebMethod, SoapHeader("requestHeader", Direction = SoapHeaderDirection.In)] public override bool BulkUpdate( XmlElement package, out XmlElement result, MetadataTableHaveEntry[] metadataHave, out string dbStamp, out Payload metadata) { var xe = XElement.Parse(package.OuterXml); // We only intercept WorkItems Updates (we can easily extend this sample to capture any operation). var wit = xe.Element("UpdateWorkItem"); if (wit != null) { if (wit.Attribute("WorkItemID") != null) { int witId = (int)wit.Attribute("WorkItemID"); // With this Id. I can query TFS for more detailed information, using TFS Client API (assuming the WIT already exists). var stateChanged = wit.Element("Columns").Elements("Column").FirstOrDefault(c => (string)c.Attribute("Column") == "System.State"); if (stateChanged != null) { var newStateName = stateChanged.Element("Value").Value; if (newStateName == "Resolved") { throw new Exception("Cannot change state to Resolved!"); } } } } // Finally, we call base method implementation return base.BulkUpdate(package, out result, metadataHave, out dbStamp, out metadata); } } 4. Build your solution and overwrite the original .asmx with the new implementation referencing our new service version (don’t forget to back it up first). 5. Copy your project’s .dll into the following path: %Program Files%\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin 6. Try saving a WorkItem into the Resolved state. Enjoy!

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s blog post covers some of the nice improvements coming with JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express.  You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Improved JavaScript Intellisense Providing Intellisense for a dynamic language like JavaScript is more involved than doing so with a statically typed language like VB or C#.  Correctly inferring the shape and structure of variables, methods, etc is pretty much impossible without pseudo-executing the actual code itself – since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime.  VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type – which is how its intellisense completion is kept accurate and complete.  Below is a simple walkthrough that shows off how rich and flexible it is with the final release. Scenario 1: Basic Type Inference When you declare a variable in JavaScript you do not have to declare its type.  Instead, the type of the variable is based on the value assigned to it.  Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable, and provide the appropriate code intellisense based on the value assigned to a variable. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable): If we later assign a numeric value to “foo” the statement completion (after this assignment) automatically changes to provide intellisense for a number: Scenario 2: Intellisense When Manipulating Browser Objects It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client.  Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects – but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods).  VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, below we are using the browser’s window object to create a global variable named “bar”.  Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it: When we assign the “bar” variable as a number (instead of as a string) the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead: Scenario 3: Showing Off Because VS 2010 is psudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it – and is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).  
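The screenshots from the original post are not reproduced in this excerpt, but the loop being described is roughly the following (variable names taken from the text):

    for (var i = 1; i < 10; i++) {
        window["bar" + i] = i;   // dynamically creates the globals bar1 through bar9 via the window object
    }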
Notice how the editor’s intellisense engine identifies and provides statement completion for them: Because variables added via the browser’s window object are also global variables – they also now show up in the global variable intellisense drop-down as well: Better yet – type inference is still fully supported.  So if we assign a string to a dynamically named variable we will get type inference for a string.  If we assign a number we’ll get type inference for a number.  Just for fun (and to show off!) we could adjust our for-loop to assign a string for even numbered variables (bar2, bar4, bar6, etc) and assign a number for odd numbered variables (bar1, bar3, bar5, etc): Notice above how we get statement completion for a string for the “bar2” variable.  Notice below how for “bar1” we get statement completion for a number:   This isn’t just a cool pet trick While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many Javascript libraries.  Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible.  VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them. Summary Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript intellisense support.  This support works with pretty much all popular JavaScript libraries.  It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications. Hope this helps, Scott P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported).  VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

  • PayPal IPN - having trouble accessing session data?

    - by Martin Bean
    Hello, all. I'm having issues with PayPal IPN integration where it seems I cannot get my solution to read session variables. Basically, in my shop module script, I store the customer's details as provided by PayPal to an orders table. However, I also wish to save products ordered in a transaction to a separate table linked by the order ID. However, it's the second part of the script that's not working, where I loop through the products in the session and then save them to the orders_products table. Is there a reason why the session data not being read? The code within shop.php is as follows: if ($paypal->validate_ipn()) { $name = $paypal->ipn_data['address_name']; $street_1 = $paypal->ipn_data['address_street']; $street_2 = ""; $city = $paypal->ipn_data['address_city']; $state = $paypal->ipn_data['address_state']; $zip = $paypal->ipn_data['address_zip']; $country = $paypal->ipn_data['address_country']; $txn_id = $paypal->ipn_data['txn_id']; $sql = "INSERT INTO orders (name, street_1, street_2, city, state, zip, country, txn_id) VALUES (:name, :street_1, :street_2, :city, :state, :zip, :country, :txn_id)"; $smt = $this->pdo->prepare($sql); $smt->bindParam(':name', $name, PDO::PARAM_STR); $smt->bindParam(':street_1', $street_1, PDO::PARAM_STR); $smt->bindParam(':street_2', $street_2, PDO::PARAM_STR); $smt->bindParam(':city', $city, PDO::PARAM_STR); $smt->bindParam(':state', $state, PDO::PARAM_STR); $smt->bindParam(':zip', $zip, PDO::PARAM_STR); $smt->bindParam(':country', $country, PDO::PARAM_STR); $smt->bindParam(':txn_id', $txn_id, PDO::PARAM_INT); $smt->execute(); // save products to orders relationship $order_id = $this->pdo->lastInsertId(); // $cart = $this->session->get('cart'); $cart = $this->session->get('cart'); foreach ($cart as $product_id => $item) { $quantity = $item['quantity']; $sql = "INSERT INTO orders_products (order_id, product_id, quantity) VALUES ('$order_id', '$product_id', '$quantity')"; $res = $this->pdo->query($sql); } $this->session->del('cart'); mail('[email protected]', 'IPN result', 'IPN was successful on wrestling-wear.com'); } else { mail('[email protected]', 'IPN result', 'IPN failed on wrestling-wear.com'); } And I'm using the PayPal IPN class for PHP as found here: http://www.micahcarrick.com/04-19-2005/php-paypal-ipn-integration-class.html, but the contents of the validate_ipn() method is as follows: public function validate_ipn() { $url_parsed = parse_url($this->paypal_url); $post_string = ''; foreach ($_POST as $field => $value) { $this->ipn_data[$field] = $value; $post_string.= $field.'='.urlencode(stripslashes($value)).'&'; } $post_string.= "cmd=_notify-validate"; // append IPN command // open the connection to PayPal $fp = fsockopen($url_parsed[host], "80", $err_num, $err_str, 30); if (!$fp) { // could not open the connection. If logging is on, the error message will be in the log $this->last_error = "fsockopen error no. $errnum: $errstr"; $this->log_ipn_results(false); return false; } else { // post the data back to PayPal fputs($fp, "POST $url_parsed[path] HTTP/1.1\r\n"); fputs($fp, "Host: $url_parsed[host]\r\n"); fputs($fp, "Content-type: application/x-www-form-urlencoded\r\n"); fputs($fp, "Content-length: ".strlen($post_string)."\r\n"); fputs($fp, "Connection: close\r\n\r\n"); fputs($fp, $post_string . 
"\r\n\r\n"); // loop through the response from the server and append to variable while (!feof($fp)) { $this->ipn_response.= fgets($fp, 1024); } fclose($fp); // close connection } if (eregi("VERIFIED", $this->ipn_response)) { // valid IPN transaction $this->log_ipn_results(true); return true; } else { // invalid IPN transaction; check the log for details $this->last_error = 'IPN Validation Failed.'; $this->log_ipn_results(false); return false; } }

    Read the article

  • 6 Ways to Free Up Hard Drive Space Used by Windows System Files

    - by Chris Hoffman
    We’ve previously covered the standard ways to free up space on Windows. But if you have a small solid-state drive and really want more hard space, there are geekier ways to reclaim hard drive space. Not all of these tips are recommended — in fact, if you have more than enough hard drive space, following these tips may actually be a bad idea. There’s a tradeoff to changing all of these settings. Erase Windows Update Uninstall Files Windows allows you to uninstall patches you install from Windows Update. This is helpful if an update ever causes a problem — but how often do you need to uninstall an update, anyway? And will you really ever need to uninstall updates you’ve installed several years ago? These uninstall files are probably just wasting space on your hard drive. A recent update released for Windows 7 allows you to erase Windows Update files from the Windows Disk Cleanup tool. Open Disk Cleanup, click Clean up system files, check the Windows Update Cleanup option, and click OK. If you don’t see this option, run Windows Update and install the available updates. Remove the Recovery Partition Windows computers generally come with recovery partitions that allow you to reset your computer back to its factory default state without juggling discs. The recovery partition allows you to reinstall Windows or use the Refresh and Reset your PC features. These partitions take up a lot of space as they need to contain a complete system image. On Microsoft’s Surface Pro, the recovery partition takes up about 8-10 GB. On other computers, it may be even larger as it needs to contain all the bloatware the manufacturer included. Windows 8 makes it easy to copy the recovery partition to removable media and remove it from your hard drive. If you do this, you’ll need to insert the removable media whenever you want to refresh or reset your PC. On older Windows 7 computers, you could delete the recovery partition using a partition manager — but ensure you have recovery media ready if you ever need to install Windows. If you prefer to install Windows from scratch instead of using your manufacturer’s recovery partition, you can just insert a standard Window disc if you ever want to reinstall Windows. Disable the Hibernation File Windows creates a hidden hibernation file at C:\hiberfil.sys. Whenever you hibernate the computer, Windows saves the contents of your RAM to the hibernation file and shuts down the computer. When it boots up again, it reads the contents of the file into memory and restores your computer to the state it was in. As this file needs to contain much of the contents of your RAM, it’s 75% of the size of your installed RAM. If you have 12 GB of memory, that means this file takes about 9 GB of space. On a laptop, you probably don’t want to disable hibernation. However, if you have a desktop with a small solid-state drive, you may want to disable hibernation to recover the space. When you disable hibernation, Windows will delete the hibernation file. You can’t move this file off the system drive, as it needs to be on C:\ so Windows can read it at boot. Note that this file and the paging file are marked as “protected operating system files” and aren’t visible by default. Shrink the Paging File The Windows paging file, also known as the page file, is a file Windows uses if your computer’s available RAM ever fills up. Windows will then “page out” data to disk, ensuring there’s always available memory for applications — even if there isn’t enough physical RAM. 
The paging file is located at C:\pagefile.sys by default. You can shrink it or disable it if you’re really crunched for space, but we don’t recommend disabling it as that can cause problems if your computer ever needs some paging space. On our computer with 12 GB of RAM, the paging file takes up 12 GB of hard drive space by default. If you have a lot of RAM, you can certainly decrease the size — we’d probably be fine with 2 GB or even less. However, this depends on the programs you use and how much memory they require. The paging file can also be moved to another drive — for example, you could move it from a small SSD to a slower, larger hard drive. It will be slower if Windows ever needs to use the paging file, but it won’t use important SSD space. Configure System Restore Windows seems to use about 10 GB of hard drive space for “System Protection” by default. This space is used for System Restore snapshots, allowing you to restore previous versions of system files if you ever run into a system problem. If you need to free up space, you could reduce the amount of space allocated to system restore or even disable it entirely. Of course, if you disable it entirely, you’ll be unable to use system restore if you ever need it. You’d have to reinstall Windows, perform a Refresh or Reset, or fix any problems manually. Tweak Your Windows Installer Disc Want to really start stripping down Windows, ripping out components that are installed by default? You can do this with a tool designed for modifying Windows installer discs, such as WinReducer for Windows 8 or RT Se7en Lite for Windows 7. These tools allow you to create a customized installation disc, slipstreaming in updates and configuring default options. You can also use them to remove components from the Windows disc, shrinking the size of the resulting Windows installation. This isn’t recommended as you could cause problems with your Windows installation by removing important features. But it’s certainly an option if you want to make Windows as tiny as possible. Most Windows users can benefit from removing Windows Update uninstallation files, so it’s good to see that Microsoft finally gave Windows 7 users the ability to quickly and easily erase these files. However, if you have more than enough hard drive space, you should probably leave well enough alone and let Windows manage the rest of these settings on its own. Image Credit: Yutaka Tsutano on Flickr     
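For reference, the hibernation and System Restore changes described above can also be made from an elevated command prompt; a rough sketch (the 5GB cap is just an example figure):

    :: disable hibernation and delete C:\hiberfil.sys
    powercfg /hibernate off
    :: cap the space System Restore (shadow copies) may use on C:
    vssadmin resize shadowstorage /for=C: /on=C: /maxsize=5GB
    :: open Disk Cleanup for C:; "Clean up system files" exposes the Windows Update Cleanup option
    cleanmgr /d C: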

    Read the article

  • BIND9 server not responding to external queries

    - by Twitchy
    I have set up a BIND server on my dedicated box which I want to host a nameserver for my domain on. When I use dig @202.169.196.59 nzserver.co.nz locally on the server I get the following response... ; <<>> DiG 9.8.1-P1 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43773 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; ANSWER SECTION: nzserver.co.nz. 3600 IN A 202.169.196.59 ;; AUTHORITY SECTION: nzserver.co.nz. 3600 IN NS ns2.nzserver.co.nz. nzserver.co.nz. 3600 IN NS ns1.nzserver.co.nz. ;; ADDITIONAL SECTION: ns1.nzserver.co.nz. 3600 IN A 202.169.196.59 ns2.nzserver.co.nz. 3600 IN A 202.169.196.59 ;; Query time: 0 msec ;; SERVER: 202.169.196.59#53(202.169.196.59) ;; WHEN: Sat Oct 27 15:40:45 2012 ;; MSG SIZE rcvd: 116 Which is good, and is the output I want. But when simply using dig nzserver.co.nz I get... ; <<>> DiG 9.8.1-P1 <<>> nzserver.co.nz ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16970 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; Query time: 308 msec ;; SERVER: 202.169.192.61#53(202.169.192.61) ;; WHEN: Sat Oct 27 17:09:12 2012 ;; MSG SIZE rcvd: 32 And if I use dig @202.169.196.59 nzserver.co.nz on another linux machine I get... ; <<>> DiG 9.7.3 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached Am I doing something wrong here? Port 53 is definitely open. /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 202.169.192.61; 202.169.206.10; }; listen-on { 202.169.196.59; }; }; /etc/bind/named.conf.local zone "nzserver.co.nz" { type master; file "/etc/bind/nzserver.co.nz.zone"; }; /etc/bind/nzserver.co.nz.zone ; BIND db file for nzserver.co.nz $ORIGIN nzserver.co.nz. @ IN SOA ns1.nzserver.co.nz. mr.steven.french.gmail.com. ( 2012102606 28800 7200 864000 3600 ) NS ns1.nzserver.co.nz. NS ns2.nzserver.co.nz. MX 10 mail.nzserver.co.nz. @ IN A 202.169.196.59 * IN A 202.169.196.59 ns1 IN A 202.169.196.59 ns2 IN A 202.169.196.59 www IN A 202.169.196.59 mail IN A 202.169.196.59
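Given that the server answers its own dig but remote queries time out, the usual first suspects are the firewall and the listening sockets rather than the zone file itself. Checks along these lines (commands are generic; the address is the one from the question) may help narrow it down:

    $> sudo netstat -lnptu | grep :53                       # named should be listening on 202.169.196.59, both UDP and TCP
    $> sudo iptables -L INPUT -nv | grep 53                 # confirm there is an ACCEPT rule for port 53
    $> sudo iptables -I INPUT -p udp --dport 53 -j ACCEPT   # if not, open it for UDP queries...
    $> sudo iptables -I INPUT -p tcp --dport 53 -j ACCEPT   # ...and for TCP (large responses, zone transfers)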

    Read the article

  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things: The IDataReader is not passed out of the Data Access Layer which prevents abstraction leak and resource leak potentials. You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward iterate once. If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items. This could give us an example like: 1: // a sample data access object class to do standard CRUD operations. 2: public class ProductDao 3: { 4: private DbProviderFactory _factory = SqlClientFactory.Instance 5:  6: // a method that would retrieve all available products 7: public IEnumerable<Product> GetAvailableProducts() 8: { 9: // must create the connection 10: using (var con = _factory.CreateConnection()) 11: { 12: con.ConnectionString = _productsConnectionString; 13: con.Open(); 14:  15: // create the command 16: using (var cmd = _factory.CreateCommand()) 17: { 18: cmd.Connection = con; 19: cmd.CommandText = _getAllProductsStoredProc; 20: cmd.CommandType = CommandType.StoredProcedure; 21:  22: // get a reader and pass back all results 23: using (var reader = cmd.ExecuteReader()) 24: { 25: while(reader.Read()) 26: { 27: yield return new Product 28: { 29: Name = reader["product_name"].ToString(), 30: ... 31: }; 32: } 33: } 34: } 35: } 36: } 37: } The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested. Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items. 
So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived: 1: // A disposable sequence of int -- yes this is completely contrived... 2: internal class Sequencer : IDisposable 3: { 4: private int _i = 0; 5: private readonly object _mutex = new object(); 6:  7: // Constructs an int sequence. 8: public Sequencer(int start) 9: { 10: _i = start; 11: } 12:  13: // Gets the next integer 14: public int GetNext() 15: { 16: lock (_mutex) 17: { 18: return _i++; 19: } 20: } 21:  22: // Dispose the sequence of integers. 23: public void Dispose() 24: { 25: // force output immediately (flush the buffer) 26: Console.WriteLine("Disposed with last sequence number of {0}!", _i); 27: Console.Out.Flush(); 28: } 29: } And then I created a generator (infinite-loop iterator) that did the using block for auto-Disposal: 1: // simply defines an extension method off of an int to start a sequence 2: public static class SequencerExtensions 3: { 4: // generates an infinite sequence starting at the specified number 5: public static IEnumerable<int> GetSequence(this int starter) 6: { 7: // note the using here, will call Dispose() when block terminated. 8: using (var seq = new Sequencer(starter)) 9: { 10: // infinite loop on this generator, means must be bounded by caller! 11: while(true) 12: { 13: yield return seq.GetNext(); 14: } 15: } 16: } 17: } This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence, Dispose be called? Well, let's see: 1: // The test program class 2: public class IteratorTest 3: { 4: // The main test method. 5: public static void Main() 6: { 7: Console.WriteLine("Going to consume 10 of infinite items"); 8: Console.Out.Flush(); 9:  10: foreach(var i in 0.GetSequence()) 11: { 12: // could use TakeWhile, but wanted to output right at break... 13: if(i >= 10) 14: { 15: Console.WriteLine("Breaking now!"); 16: Console.Out.Flush(); 17: break; 18: } 19:  20: Console.WriteLine(i); 21: Console.Out.Flush(); 22: } 23:  24: Console.WriteLine("Done with loop."); 25: Console.Out.Flush(); 26: } 27: } So, what do we see? Do we see the "Disposed" message from our dispose, or did the Dispose get skipped because from an "eyeball" perspective we should be locked in that infinite generator loop? Here's the results: 1: Going to consume 10 of infinite items 2: 0 3: 1 4: 2 5: 3 6: 4 7: 5 8: 6 9: 7 10: 8 11: 9 12: Breaking now! 13: Disposed with last sequence number of 11! 14: Done with loop. Yes indeed, when we break the loop, the state machine that C# generates for yield iterate exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder and that led to this test. If you've never seen iterators before (I wrote a previous entry here) the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code, that every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But, I've been bitten by assumptions before, so it's good to test and see. 
Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...". Technorati Tags: C#,.NET

    Read the article

  • The clock hands of the buffer cache

    - by Tony Davis
    Over a leisurely beer at our local pub, the Waggon and Horses, Phil Factor was holding forth on the esoteric, but strangely poetic, language of SQL Server internals, riddled as it is with 'sleeping threads', 'stolen pages', and 'memory sweeps'. Generally, I remain immune to any twinge of interest in the bowels of SQL Server, reasoning that there are certain things that I don't and shouldn't need to know about SQL Server in order to use it successfully. Suddenly, however, my attention was grabbed by his mention of the 'clock hands of the buffer cache'. Back at the office, I succumbed to a moment of weakness and opened up Google. He wasn't lying. SQL Server maintains various memory buffers, or caches. For example, the plan cache stores recently-used execution plans. The data cache in the buffer pool stores frequently-used pages, ensuring that they may be read from memory rather than via expensive physical disk reads. These memory stores are classic LRU (Least Recently Used) buffers, meaning that, for example, the least frequently used pages in the data cache become candidates for eviction (after first writing the page to disk if it has changed since being read into the cache). SQL Server clearly needs some mechanism to track which pages are candidates for being cleared out of a given cache, when it is getting too large, and it is this mechanism that is somewhat more labyrinthine than I previously imagined. Each page that is loaded into the cache has a counter, a miniature "wristwatch", which records how recently it was last used. This wristwatch gets reset to "present time", each time a page gets updated and then as the page 'ages' it clicks down towards zero, at which point the page can be removed from the cache. But what if SQL Server is suffering memory pressure and urgently needs to free up more space than is represented by zero-counter pages (or plans etc.)? This is where our 'clock hands' come in. Each cache has associated with it a "memory clock". Like most conventional clocks, it has two hands; one "external" clock hand, and one "internal". Slava Oks is very particular in stressing that these names have "nothing to do with the equivalent types of memory pressure". He's right, but the names do, in that peculiar Microsoft tradition, seem designed to confuse. The hands do relate to memory pressure; the cache "eviction policy" is determined by both global and local memory pressures on SQL Server. The "external" clock hand responds to global memory pressure, in other words pressure on SQL Server to reduce the size of its memory caches as a whole. Global memory pressure – which just to confuse things further seems sometimes to be referred to as physical memory pressure – can be either external (from the OS) or internal (from the process itself, e.g. due to limited virtual address space). The internal clock hand responds to local memory pressure, in other words the need to reduce the size of a single, specific cache. So, for example, if a particular cache, such as the plan cache, reaches a defined "pressure limit" the internal clock hand will start to turn and a memory sweep will be performed on that cache in order to remove plans from the memory store. During each sweep of the hands, the usage counter on the cache entry is reduced in value, effectively moving its "last used" time to further in the past (in effect, setting back the wrist watch on the page a couple of hours) and increasing the likelihood that it can be aged out of the cache.
There is even a special Dynamic Management View, sys.dm_os_memory_cache_clock_hands, which allows you to interrogate the passage of the clock hands. Frequently turning hands equates to excessive memory pressure, which will lead to performance problems. Two hours later, I emerged from this rather frightening journey into the heart of SQL Server memory management, fascinated but still unsure if I'd learned anything that I'd put to any practical use. However, I certainly began to agree that there is something almost Tolkienian in the language of the deep recesses of SQL Server. Cheers, Tony.
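For anyone who wants to watch those hands move, the DMV mentioned above can be queried directly; a minimal example (column names as documented for SQL Server 2008 and later):

    SELECT name, type, clock_hand, clock_status, rounds_count, removed_all_rounds_count
    FROM sys.dm_os_memory_cache_clock_hands
    WHERE rounds_count > 0
    ORDER BY rounds_count DESC;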

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview Many web service developers use soapUI for various tests like: smoke test, unit test, and load testing because you can get a free edition that is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, then the free edition doesn't necessarily make it easy. This feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example. Hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services and you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI. Features in soapUI Free Edition Relating to this Topic The soapUI test tool provides a very feature rich environment that can do many things provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple features for our dynamic payload example: Test Case Properties Scripting with Groovy Basically, we will be using a property as a global variable and we will manipulate that property using a Groovy script. Setting Up Our Property Properties are available throughout soapUI and here is a snippet from the soapUI website defining the locations: Projects : for handling Project scope values, for example a subscription ID TestSuite : for handling TestSuite scoped values, can be seen as "arguments" to a TestSuite TestCases : for handling TestCase scoped values, can be seen as "arguments" to a TestCase Properties TestStep : for providing local values/state within a TestCase Local TestStep properties : several TestStep types maintain their own list of properties specific to their functionality : DataSource, DataSink, Run TestCase MockServices : for handling MockService scoped values/arguments MockResponses : for handling MockResponse scoped values Global Properties : for handling Global properties, optionally from an external source For our example, we will be defining a custom property in a TestCase called SimpleAsyncPayload. The property can be created in either the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator or the Properties label in the TestCase editor: Navigator Panel TestCase Editor You will notice that I set a value of “0” for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;) Let's Get Groovy We will now add a new Groovy Script step to the TestCase called Manipulate Payload: TestCase Editor > Append Step > Groovy Script Once we have added the Groovy Script step to our TestCase, we can open the Groovy Script editor to add the code to: Get the current value of the property we created called SimpleAsyncPayload. Convert the value of the property to an integer. Increment the value. Store the incremented value back into the TestCase property called SimpleAsyncPayload. 
The script should look something like the following: Groovy Script Editor – Manipulate Payload At this point we can test the script to see if it is working by simply running the TestCase (left-click on the green triangle in the upper left-hand corner of the TestCase editor). To verify if it ran correctly, we can look at the value of the SimpleAsyncPayload property which should now be 1: TestCase Editor – Run Results All that is left to complete the TestCase is to append another step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select the SimpleAsyncBPELProcessBingd -> process as the operation (any other information being requested, simply use the defaults unless you are calling an asynchronous operation then do not add any assertions). We are now in familiar ground with the Test Request editor. Depending upon the type of operation you are invoking (synchronous or asynchronous), please update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]): Test Request Editor – Insert Property Value Your payload should now look something like the following: Test Request Editor – Inserted Property Value Just like before, we are now ready to run the TestCase. If everything goes as expected we should see a response like the following: Message Viewer – Results of TestCase Run We are now setup to be able to run a stress test where the payload will change for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever can be done via the soapUI scripting. Hopefully you have found this useful and happy testing to you :)
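The Groovy editor screenshot referred to above is not reproduced in this excerpt; the four steps it implements come out to only a few lines (a sketch using the standard variables available in a soapUI Groovy Script step and the property name from the walkthrough):

    // read the TestCase property, treat it as an integer, increment it, and write it back
    def raw = testRunner.testCase.getPropertyValue("SimpleAsyncPayload")
    def next = raw.toInteger() + 1
    testRunner.testCase.setPropertyValue("SimpleAsyncPayload", next.toString())
    log.info "SimpleAsyncPayload is now ${next}"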

    Read the article

  • Whoosh: PASS Board Year 1, Q4

    - by Denise McInerney
    "Whoosh". That's the sound the last quarter of 2012 made as it rushed by. My first year on the PASS Board is complete, and the last three months of it were probably the busiest. PASS Summit 2012 Much of October was devoted to preparing for Summit. Every Board  member, HQ staffer and dozens of volunteers were busy in the run-up to our flagship event. It takes a lot of work to put on the Summit. The community meetings,  first-timers program, keynotes, sessions and that fabulous Community Appreciation party are the result of many hours of preparation. Virtual Chapters at the Summit With a lot of help from Karla Landrum, Michelle Nalliah, Lana Montgomery and others at HQ the VCs had a good presence at Summit. We started the week with a VC leaders meeting. I shared some information about the activities and growth during the first part of the year.   From January - September 2012: The number of VCs increased from 14 to 20 VC membership  grew from 55,200 to 80,100 Total attendance at VC meetings increased from 1,480 to 2,198 Been part of PASS Global Growth with language-based VC- including Chinese, Spanish and Portuguese. We also heard from some VC leaders and volunteers. Ryan Adams (Performance VC) shared his tips for successful marketing of VC events. Amy Lewis (Business Intelligence VC) described how the BI chapter has expanded to support PASS' global growth by finding volunteers to organize events at times that are convenient for people in Europe and Australia. Felipe Ferreira (Portuguese language VC) described the experience of building a user group first in Brazil, then expanding to work with Portuguese-speaking data professionals around the world. Virtual Chapter leaders and volunteers were in evidence throughout Summit, beginning with the Welcome Reception. For the past several years VCs have had an organized presence at this event, signing up new members and advertising their meetings. Many VC leaders also spent time at the Community Zone. This new addition to the Summit proved to be a vibrant spot were new members and volunteers could network with others and find out how to start a chapter or host a SQL Saturday. Women In Technology 2012 was the 10th WIT Luncheon to be held at Summit. I was honored to be asked to be on the panel to discuss the topic "Where Have We Been and Where are We Going?" The PASS community has come a long way in our understanding of issues facing women in tech and our support of women in the organization. It was great to hear from panelists Stefanie Higgins and Kevin Kline who were there at the beginning as well as Kendra Little and Jen Stirrup who are part of the progress being made by women in our community today. Bylaw Changes The Board spent a good deal of time in 2012 discussing how to move our global growth initiatives forward. An important component of this is a proposed change to how the Board is elected with some seats representing geographic regions. At the end of December we voted on these proposed bylaw changes which have been published for review. The member review and feedback is open until February 8. I encourage all members to review these changes and send any feedback to [email protected]  In addition to reading the bylaws, I recommend reading Bill Graziano's blog post on the subject. Business Analytics Conference At Summit we announced a new event: the PASS Business Analytics Conference. The inaugural event will be April 10-12, 2013 in Chicago. The world of data is changing rapidly. 
More and more businesses want to extract value and insight from their data. Data professionals who provide these insights or enable others to do so are in demand. The BA Conference offers expert content on predictive analytics, data exploration and visualization, content delivery strategies and more. By holding this new event PASS is participating in important discussions happening in our industry, offering our members more educational value and reaching out to data professionals who are not currently part of our organization. New Year, New Portfolio In addition to my work with the Virtual Chapters I am also now responsible for the 24 Hours of PASS portfolio. Since the first 24HOP of 2013 is scheduled for January 30 we started the transition of the portfolio work from Rob Farley to me right after Summit. Work immediately started to secure speakers for the January event. We have also been evaluating webinar platforms that can be used for 24HOP as well as the Virtual Chapters. Next Up 24 Hours of PASS: Business Analytics Edition will be held on January 30. I'll be there and will moderate one or two sessions. The 24HOP topics are a sneak peek into the type of content that will be offered at the Business Analytics Conference. I hope to see some of you there. The Virtual Chapters have hit the ground running in 2013; many of them have events scheduled. The Application Development VC is getting restarted  and a new Business Analytics VC will be starting soon. Check out the lineup and join the VCs that interest you. And watch the Events page and Connector for announcements of upcoming meetings. At the end of January I will be attending a Board meeting in Seattle, and February 23 I will be at SQL Saturday #177 in Silicon Valley.

    Read the article

  • Come meet our Interns in Dublin

    - by klaudia.drulis
    Oracle Worldwide Product Translation Group (WPTG) provides solutions for all Oracle product and Content translation requirements. WPTG is a global organisation with its headquarters in Ireland and employees in Oracle offices worldwide. WPTG offer expertise in fields such as process engineering, tools development, linguistic quality, terminology, global product release, financial and vendor management. WPTG provides translation solution for over 40 languages including Asia Pacific, European, American and Middle Eastern languages. WPTG first introduced an intern program over 10 years ago and it has become a key component of our teams structure. The majority of Interns are sourced from a Computer Science related course, these Interns joining the engineering team. Others are sourced from Business courses and work within the Business / Project management area. The intern program allows us to maintain ties with current course curriculum and brings fresh energy and perspective into our Organisation. Four of the full time staff working in Dublin today joined us originally as Interns and subsequently were offered permanent positions. Come Meet some of our 2010 Interns, Come and see what Darragh, Anthony, Caoimhe, James and Artemij thought about working within the WPTG at Oracle: Darragh “Oracle has been a fun, challenging work placement for me. From day one I was treated as a full member of staff, this was both comforting and a little bit scary. The responsibilities stack up but I found I was able to keep on top of everything and even make improvements to how we handle a few things thanks to a great team and a very supportive manager. There’s a very positive atmosphere in work that’s really conducive to getting a lot of work done. Ideas seem to be the central hub in my line of business so all of my ideas and innovations were greeted with enthusiasm. Oracle has given me a fantastic opportunity and I urge you to grab it with both hands, you’ll find that you’re with a set of like minded people from all works of life that make work both interesting and fun. Even when the pressure is on you know that you can always get help and advice from someone nearby. My last word of advice is don’t be afraid to stick your neck out, everyone here is willing to learn, try something new and innovate, your voice will be heard and who knows, you could end up having a large impact on Oracle and your career.” Anthony “I had a great experience working with Oracle, from day one I was treated like a full member of staff with responsibilities of my own. I found that the more I put into the work the more I got out from the experience. Volunteering and being willing to face challenges have made this a more exciting placement. I am given a lot of leeway to do my own projects and so I’ve found that I am really enjoying my time here.” Caoimhe “I am currently spending my year of placement within the Release Management Team in the WPTG. My main role is to handle the finance process of all translation projects under 100k which includes creating workspecs and PO's, sending out kits, dealing with vendor queries and handling the invoicing and payment part. I am really enjoying my time here at Oracle, everyone is very open and friendly and willing to help you out with any questions you may have. I would definitely be interested in returning to Oracle after I graduate!” James “I am currently on a 12 month placement with Oracle, working as part of the Worldwide Product Translation Group in the Business Management. 
The Business Management team provides a global view on WPTG’s vendor and business strategy and is an interface into WPTG for new business. The business management team work together to support the external translation partner network. My role is to support the Business Management team and also to work on various projects when the need arises. This involves working with translation vendors and working with other Oracle employees worldwide. I am really enjoying my time working for Oracle, at times it can be challenging but also very rewarding. I would recommend any student wanting to undertake a placement year to apply to Oracle; I made some great friends and I will never forget my time in Dublin.” Artemij “From working within Oracle, I have truly understood what "career path" is, and what opportunities a large corporation like Oracle can offer. Without any illusions, the work itself is exciting, sometimes challenging, tests your ability to handle pressure, to make decisions and take responsibility, to learn quickly and cooperate efficiently in order to solve a problem. I have learned a lot about myself. What I am good at, where and what I can do better. My placement at Oracle has allowed me to get a clearer picture of what I want, and which door I am going to open after college. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com

    Read the article

  • Google Analytics - Unable to get GA Tracking

    - by Pure.Krome
    We've been using GA for a few years with no problems. About 2-3 weeks ago we tried to clean up some of our tracking, and on one of our profiles it's not working anymore (since Oct 10). First some context, then some GA debugging code. 1. Context. We have the following setup: different root domains AND different sub-domains on one of the root domains. www.website.com www.website.com.au www.anotherWebsite.com foo.website.com baa.website.com So what we're doing is the following: each root domain and each sub-domain gets its own tracking code. This way we can allow separate people (from outside our company) to access only their own data. E.g. a manager for foo.website.com can only see data related to that domain and cannot see data on the other domains. We have one last account which is the SUM of all the domains; this is for us, so we can see total numbers. So to do this, we have two trackers that fire on the page. The individual accounts all work fine - they seem to be tracking data OK. The 'global' account is not working, and this gives us the Tracking Not Installed error. This has been going on since Oct 10, so the wait 24/48/72 hours thing is waaaaay over. 2. GA Debug code. Installing the GA Debug Chrome extension gives the following output. I've tried to hide anything that could be considered secret. UA-XXXXX34-1 == Global account (which isn't working any more). UA-XXXXX34-11 == Specific account for www.website.com _gaq.push processing "_setAccount" for args: "[UA-XXXXX34-1]": ga_debug.js:18 _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18 _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18 _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18 Track Pageview ga_debug.js:18 Tracking beacon sent! utmwv=--snipped-- Account ID : UA-XXXX234-1 Page Title : Some page title Host Name : www.website.com Page : / Referring URL : - Hit ID : 1923583969 Visitor ID : 785310647 Session Count : 51 Session Time - First : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Session Time - Last : Mon Oct 29 2012 11:41:46 GMT 1100 (AUS Eastern Summer Time) Session Time - Current : Mon Oct 29 2012 12:19:23 GMT 1100 (AUS Eastern Summer Time) Campaign Time : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Campaign Session : 1 Campaign Count : 1 Campaign Source : (direct) Campaign Medium : (none); Campaign Name : (direct) Language : en-gb Encoding : UTF-8 Flash Version : 11.4 r31 Java Enabled : true Screen Resolution : 1050x1680 Browser Size : 1033x861 Color Depth : 32-bit Ga.js Version : 5.3.7d Cachebuster : 1846514973 ga_debug.js:18 _gaq.push processing "_setAccount" for args: "[UA-XXXX234-11]": ga_debug.js:18 _gaq.push processing "_setDomainName" for args: "[website.com]": ga_debug.js:18 _gaq.push processing "_setAllowLinker" for args: "[true]": ga_debug.js:18 _gaq.push processing "_trackPageview" for args: "[]": ga_debug.js:18 Track Pageview ga_debug.js:18 Tracking beacon sent! 
utmwv=--snipped-- Account ID : UA-XXXX234-11 Page Title : SomePageTitle Host Name : www.website.com Page : / Referring URL : - Hit ID : 1923583969 Visitor ID : 785310647 Session Count : 51 Session Time - First : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Session Time - Last : Mon Oct 29 2012 11:41:46 GMT 1100 (AUS Eastern Summer Time) Session Time - Current : Mon Oct 29 2012 12:19:23 GMT 1100 (AUS Eastern Summer Time) Campaign Time : Thu Aug 23 2012 15:20:17 GMT 1000 (AUS Eastern Standard Time) Campaign Session : 1 Campaign Count : 1 Campaign Source : (direct) Campaign Medium : (none); Campaign Name : (direct) Language : en-gb Encoding : UTF-8 Flash Version : 11.4 r31 Java Enabled : true Screen Resolution : 1050x1680 Browser Size : 1033x861 Color Depth : 32-bit Ga.js Version : 5.3.7d Cachebuster : 1580443754 And this is the JS code we have. BTW, it is inside a <head></head>:
    <script type="text/javascript">
      var _gaq = _gaq || [];
      _gaq.push(
        ['_setAccount', 'UA-XXXX234-1'],
        ['_setDomainName', 'website.com'],
        ['_setAllowLinker', true],
        ['_trackPageview'],
        ['b._setAccount', 'UA-XXXX234-11'],
        ['b._setDomainName', 'website.com'],
        ['b._setAllowLinker', true],
        ['b._trackPageview']
      );
      (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>
    Finally, I've triple-checked that the UA is the correct text. And yes, the global account is -1 and the specific domain is -11. Does anyone have any suggestions to help?
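    For comparison, here is a minimal sketch of the dual-tracker setup that is often suggested for this kind of roll-up configuration. It is not a confirmed fix: the account IDs are the same placeholders used above, and the use of _setDomainName('none') on the roll-up tracker is an assumption (the usual classic-GA setting when one property spans several top-level domains such as website.com, website.com.au and anotherWebsite.com).
    <script type="text/javascript">
      var _gaq = _gaq || [];

      // Roll-up tracker: spans several top-level domains, so it is not
      // pinned to a single root domain (assumption for this sketch).
      _gaq.push(['_setAccount', 'UA-XXXX234-1']);
      _gaq.push(['_setDomainName', 'none']);
      _gaq.push(['_setAllowLinker', true]);
      _gaq.push(['_trackPageview']);

      // Named site-specific tracker ("b"): scoped to this root domain so
      // that its subdomains share the same cookies.
      _gaq.push(['b._setAccount', 'UA-XXXX234-11']);
      _gaq.push(['b._setDomainName', 'website.com']);
      _gaq.push(['b._setAllowLinker', true]);
      _gaq.push(['b._trackPageview']);

      // Standard asynchronous loader for ga.js.
      (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>
    Whether this clears the Tracking Not Installed status depends on how the roll-up web property itself is configured, so treat it as a starting point for debugging rather than a definitive fix.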

    Read the article

  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subject varies from life in India/USA to movies, music, and of course SQL. We have our differences when we talk about food or movies, but we always agree when we talk about SQL. Yesterday while chatting with him we talked about SQLPASS and the conversation lasted for a long time. Here is the conversation between us on GTalk. I have removed a few of the personal exchanges and formatted the rest into paragraphs, as GTalk often loses formatting. Pinal: Sri, congrats on running for the PASS BoD again. You were so close last year. What made you decide to run again this year? Sri: Thank you Pinal for your leadership in the PASS India community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I would run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. Right from the pressure of staying at the top of the class from kindergarten to your graduation, and the relentless push from your parents about studying and getting good grades (and nothing else matters), you end up essentially living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin, the ability to stand up for who you really are and what you want to accomplish, and in the process stay true to those values. I am striving for a greater cause: to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting for. Giving up upon failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids? Pinal: As someone who has such strong roots in India, what can the Indian PASS community expect from you? Sri: First of all, I think fostering regional leadership is something PASS must encourage as part of its global growth plan. For PASS global, being able to understand all the issues in a region of the world and make sound decisions will be a tough thing to do on a continuous basis. I expect people like you, chapter leaders, regional mentors and MVPs of the region to start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year. I would like to see more SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world, so the formula needs to be tweaked a little for it to work better in India. Once the SQLSaturday model is vetted out, maybe there could be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to the USA, or Europe for that matter, is incredibly hard due to visa issues etc. So this could be a case where PASS comes closer to where the community is. Pinal: What portfolio would you take on if you are elected to the PASS Board? Sri: There are some very strong folks on the PASS Board today. 
The President discusses the portfolios with the group and makes the final call on the portfolios. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way people on the board have a direct vested interest in a few portfolios. Personally, I know I would do these portfolios good justice – Chapters, Global Growth and Events (SQLSat, SQLRally). I would try to see if we can get a director to focus on Volunteers. To me that is very critical for growth in the international regions. Pinal: This is an interesting conversation with you, Sri. I have known you for a long time, but this is indeed inspiring to many. India is a big country and we appreciate your thoughts. Sri: Thank you very much for taking the time to chat with me today. Cheers. There are pretty strong candidates in the SQLPASS Board elections this year. I know all of them in person, and honestly it is going to be extremely difficult to pick just one to vote for; I am indeed in a crunch right now over how to pick one over another. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal – Better SQL Server Community. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background: Designing stored procedures that are safe for multiple subscribers (to call simultaneously) can be challenging. For example, let’s say that you want multiple worker processes to poll a shared work queue that’s encapsulated as a SQL table. This is a common scenario, and through experience you’ll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table. There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both the NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, SELECTs with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise “dirty”), whereas a SELECT with NOLOCK ignores all locks, including dirty reads. For the initial update of the flag (that marks the record as available for subscription) I don’t use the NOLOCK table hint, because I want to be sensitive to the “active” records in the table and I want to exclude them. I use an update lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST table hint, in order to explicitly lock the records I’m updating (UPDLOCK) but not place a lock on the table when selecting the records that I’m going to update (READPAST). UPDATEs should be allowed to lock the rows affected, because we’re probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process has a shared read lock in place. Thus, without the UPDLOCK hint the result would be a lock escalation deadlock; with the UPDLOCK hint this condition is mitigated. Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to READ COMMITTED (rather than the default of SERIALIZABLE). Guidance: In the stored procedure that returns records to the multiple subscribers:
    - Perform the UPDATE first. Change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that “got stuck” in an intermediate state, or for other auditing purposes.
    - In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.
    - In the UPDATE statement, also use a WHERE clause that uses a sub-select with a (READPAST) table hint to select the records that you’re going to update.
    - In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table to isolate the record(s) that you’ve just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records’ identifiers within the same operation.
    - Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint.
Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; or use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person’s last name). NOLOCK is generally the better fit in this part of the scenario. See the following as an example:
    CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
    AS
    BEGIN
        -- OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        -- DECLARE TEMP TABLE
        -- Note that this example uses CustomerId as an identifier;
        -- you could just use the Identity column Id if that’s all you need.
        DECLARE @CustomersTempTable TABLE ( CustomerId NVARCHAR(255) )
        -- PERFORM UPDATE FIRST
        -- [Customers] is the name of the table
        -- [Id] is the Identity Column on the table
        -- [CustomerId] is the business document key used to identify the
        -- record globally, i.e. in other systems or across SQL tables
        -- [Status] is an INT or BIT field (if the status is a binary state)
        -- [LastUpdated] is a datetime field used to record the time of the
        -- last update
        UPDATE [Customers] WITH (UPDLOCK)
        SET [Status] = 1, [LastUpdated] = GETDATE()
        OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
        WHERE ([Id] IN (SELECT TOP 100 [Id]
                        FROM [Customers] WITH (READPAST)
                        WHERE ([Status] = 0)
                        ORDER BY [Id] ASC))
        -- PERFORM SELECT FROM ENTITY TABLE
        SELECT [C].[CustomerId], [C].[FirstName], [C].[LastName],
               [C].[Address1], [C].[Address2], [C].[City], [C].[State],
               [C].[Zip], [C].[ShippingMethod], [C].[Id]
        FROM [Customers] AS [C] WITH (NOLOCK), @CustomersTempTable AS [TEMP]
        WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
    END
    In a system that has been designed to have multiple status values for records that need to be processed in the work queue, it is necessary to have a “watch dog” process by which “stale” records in intermediate states (such as “In Progress”) are detected, i.e. a [Status] of 0 = New or Unprocessed; a [Status] of 1 = In Progress; a [Status] of 2 = Processed; etc. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the work queue, hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled / stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. 
The example below looks for records that have been in the “In Progress” state ([Status] = 1) for greater than 60 seconds:
    CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
    AS
    BEGIN
        -- TO OVERRIDE THE DEFAULT ISOLATION LEVEL
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        -- SET NOCOUNT ON
        SET NOCOUNT ON
        DECLARE @MaxWait int;
        SET @MaxWait = 60
        IF EXISTS (SELECT 1 FROM [dbo].[Customers] WITH (READPAST)
                   WHERE ([Status] = 1)
                   AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
        BEGIN
            SELECT 1 AS [IsWatchDogError]
        END
        ELSE
        BEGIN
            SELECT 0 AS [IsWatchDogError]
        END
    END
    Downloads: The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures and one to populate the sample database with 10,000 sample records. I am very grateful to Red-Gate software for their excellent SQL Data Generator tool which enabled me to create these sample records in no time at all.
    References:
    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx

    Read the article

  • Thought Oracle Usability Advisory Board Was Stuffy? Wrong. Justification for Attending OUAB: ROI

    - by ultan o'broin
    Looking for reasons to tell your boss why your organization needs to join the Oracle Usability Advisory Board, or why you need approval to attend one of its meetings (see the requirements)? Try phrases such as "Continued Return on Investment (ROI)", "Increased Productivity" or "Happy Workers". With OUAB your participation is about realizing and sustaining ROI across the entire applications life-cycle: from input, to designs, to implementation choices and integration, to usage and performance, and on to measuring and improving the onboarding and support experience. If you think this is a boring meeting of middle-aged people sitting around moaning about customizing desktop forms and why the BlackBerry is here to stay, think again! How about this for a rich agenda, all designed to engage the audience in a thought-provoking and feedback-eliciting day of swirling interactions, contextual usage, global delivery, mobility, consumerization, gamification and tailoring your implementation to reflect real users doing real work in real environments. Foldable, rollable ereader devices provide a newspaper-like UX for electronic news. Or a way to wrap silicon chips, perhaps. Explored at the OUAB Europe Meeting (photograph from Terrace Restaurant in TVP. Nom.) At the 7 December 2012 OUAB Europe meeting in Oracle Thames Valley Park, UK, Oracle partners and customers stepped up to the mic and PPT decks with a range of facts and examples to astound any UX conference C-level sceptic. Over the course of the day we covered much ground, but it was all related, in a contextual, flexible, simplified, engaging way, to delivering results for business: that means solving problems. This means being about the user and their tasks, and how design and technology transform work into a productive activity that users and bean counters will be excited by. The sessions really gelled for me:
    1. Mobile design patterns and the powerful propositions for customers and partners offered by using the design guidance with Oracle ADF Mobile. Customers' and partners' existing ADF developers are now productive, efficient ADF Mobile developers, applying proven UX guidance using ADF Mobile components and other Oracle Fusion Middleware in the development toolkit. You can find the Mobile UX Design Patterns and Guidance on Building Mobile Apps on OTN.
    2. Oracle Voice and Apps. How this medium offers so much potential in the enterprise and provides a window into Fusion Apps cloud web services, Oracle RightNow NLP and Nuance technology. Exciting stuff, demoed live on a mobile phone. Stay tuned for more features and modalities and how you can tailor your own apps experience.
    3. Oracle RightNow Natural Language Processing (NLP) Virtual Assistant technology (Ella): how contextual intervention and learning from user sessions deliver a great personalized UX for users interacting with Ella, a fifth-generation VA to solve problems and seek knowledge.
    4. BYOD Keynote: A balanced keynote address contrasting Fujitsu's explanation of the concept, challenges and trends (setting the expectation that BYOD must be embraced in a flexible way) with the resolute, carefully crafted, high-security enterprise requirements that nuance the BYOD concept and proposals with the realities of a world of water-tight information and device-sharing policies. Fascinating stuff, as well as providing anecdotes to make us think about our own BYOD deployments. One size does not fit all.
    5. Icon Cultural Surveys Results and Insights Arising: Ever wondered about the cultural appropriateness of icons used in software UIs, and how these icons are assessed for global use? Or considered that social media "Like" icons might be unacceptable hand gestures in some cultures or enterprises? Or whether old-world icons like the floppy-disk Save icon are still found acceptable? Well, the survey results told you. Challenges must be tested, over time, and context of use is critical now, including external factors such as internet and social media adoption. Indeed, the fears about global rejection of the face and hand icons were not borne out, and some of the more anachronistic icons (checkbooks, microphones, reel-to-reel tape decks, 3.5" floppies for "save") have become accepted metaphors for current actions. More importantly, the findings brought into focus the reason for OUAB - engage with and elicit feedback through working groups before we build anything.
    6. EReaders and Oracle iBook: What are the uptake and trends of ereaders? And how about a demo of an iBook with enterprise apps content? Well received by the audience, the session included a live running poll of ereader usage.
    7. Gamification Design Jam: A fun, hands-on event for teams of Oracle staff, partners and customers, actually building gamified flows, a practice that can be applied right away by customers and partners.
    8. UX Direct: A new offering of usability best practices, coming to an external website for you in 2013. Find a real user, observe their tasks, design and approve, build and measure. Simple stuff to improve apps implementations no end.
    9. FUSE (an internal term only, basically Fusion Simplified Experience): a demo of the new face of Fusion Applications: inherently mobile, simple to use, social, personalizable and FAST; three great demos from the HCM, CRM and ICT world on how these UX designs can be used in different ways.
    So, a powerful breadth and depth of UX solutions and opportunities for customers and partners to engage with, and to explore how they can make their users happy and benefit their business, reaping continued ROI from those apps investments. Find out more about the OUAB and how to get involved here ... 

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx
    A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and running the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component here and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.
    Component List
    Research
    There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and by personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets, and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends who are into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used: PC Part Picker, Tom’s Hardware, bit-tech, Reddit.
    Purchase
    PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). 
Ultimately though, price wasn’t always the best, and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives, and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them, including the mouse (locally) and headphones. NCIX is a company that I had never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy on you buying insurance for your orders turned me off. That policy makes it clear to me that the company finds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless, they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well-known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange-only return policies on components. To their credit, those policies are listed in the cart underneath each item. The visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them.
    Coming Next
    In the next part I’ll tackle my build experience.

    Read the article

  • Fluent NHibernate/SQL Server 2008 insert query problem

    - by Mark
    Hi all, I'm new to Fluent NHibernate and I'm running into a problem. I have a mapping defined as follows:
    public PersonMapping()
    {
        Id(p => p.Id).GeneratedBy.HiLo("1000");
        Map(p => p.FirstName).Not.Nullable().Length(50);
        Map(p => p.MiddleInitial).Nullable().Length(1);
        Map(p => p.LastName).Not.Nullable().Length(50);
        Map(p => p.Suffix).Nullable().Length(3);
        Map(p => p.SSN).Nullable().Length(11);
        Map(p => p.BirthDate).Nullable();
        Map(p => p.CellPhone).Nullable().Length(12);
        Map(p => p.HomePhone).Nullable().Length(12);
        Map(p => p.WorkPhone).Nullable().Length(12);
        Map(p => p.OtherPhone).Nullable().Length(12);
        Map(p => p.EmailAddress).Nullable().Length(50);
        Map(p => p.DriversLicenseNumber).Nullable().Length(50);
        Component<Address>(p => p.CurrentAddress, m =>
        {
            m.Map(p => p.Line1, "Line1").Length(50);
            m.Map(p => p.Line2, "Line2").Length(50);
            m.Map(p => p.City, "City").Length(50);
            m.Map(p => p.State, "State").Length(50);
            m.Map(p => p.Zip, "Zip").Length(2);
        });
        Map(p => p.EyeColor).Nullable().Length(3);
        Map(p => p.HairColor).Nullable().Length(3);
        Map(p => p.Gender).Nullable().Length(1);
        Map(p => p.Height).Nullable();
        Map(p => p.Weight).Nullable();
        Map(p => p.Race).Nullable().Length(1);
        Map(p => p.SkinTone).Nullable().Length(3);
        HasMany(p => p.PriorAddresses).Cascade.All();
    }

    public PreviousAddressMapping()
    {
        Table("PriorAddress");
        Id(p => p.Id).GeneratedBy.HiLo("1000");
        Map(p => p.EndEffectiveDate).Not.Nullable();
        Component<Address>(p => p.Address, m =>
        {
            m.Map(p => p.Line1, "Line1").Length(50);
            m.Map(p => p.Line2, "Line2").Length(50);
            m.Map(p => p.City, "City").Length(50);
            m.Map(p => p.State, "State").Length(50);
            m.Map(p => p.Zip, "Zip").Length(2);
        });
    }
    My test is:
    [Test]
    public void can_correctly_map_Person_with_Addresses()
    {
        var myPerson = new Person("Jane", "", "Doe");
        var priorAddresses = new[]
        {
            new PreviousAddress(ObjectMother.GetAddress1(), DateTime.Parse("05/13/2010")),
            new PreviousAddress(ObjectMother.GetAddress2(), DateTime.Parse("05/20/2010"))
        };
        new PersistenceSpecification<Person>(Session)
            .CheckProperty(c => c.FirstName, myPerson.FirstName)
            .CheckProperty(c => c.LastName, myPerson.LastName)
            .CheckProperty(c => c.MiddleInitial, myPerson.MiddleInitial)
            .CheckList(c => c.PriorAddresses, priorAddresses)
            .VerifyTheMappings();
    }
    GetAddress1() (yeah, horrible name) has Line2 == null. The tables seem to be created correctly in SQL Server 2008, but the test fails with a SQLException "String or binary data would be truncated." When I grab the SQL statement in SQL Profiler, I get:
    exec sp_executesql N'INSERT INTO PriorAddress (Line1, Line2, City, State, Zip, EndEffectiveDate, Id) VALUES (@p0, @p1, @p2, @p3, @p4, @p5, @p6)',
        N'@p0 nvarchar(18),@p1 nvarchar(4000),@p2 nvarchar(10),@p3 nvarchar(2),@p4 nvarchar(5),@p5 datetime,@p6 int',
        @p0=N'6789 Somewhere Rd.',@p1=NULL,@p2=N'Hot Coffee',@p3=N'MS',@p4=N'09876',@p5='2010-05-13 00:00:00',@p6=1001
    Notice the @p1 parameter is being set to nvarchar(4000) and being passed a NULL value. Why is it setting the parameter to nvarchar(4000)? How can I fix it? Thanks!

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years… Namespaces are used for a few different things:
    Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package.
    Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases.
    Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows explorer, for example, has tried in a few different ways to flatten the file system hierarchy for the user.
    1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today. 2 may not always be reasonably worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note however that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think we need to import “Microsoft.SqlServer.Management.Smo.Agent”, and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you’ll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS:
    var http = require("http");
    This is of course importing an HTTP stack module into the code. There is no noise here. Let’s break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here. 
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the front line. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names, and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client side, on the left of that assignment. What if I have two libraries with the name “http”? Well, you can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don’t really want that, do you? RequireJS’ module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single-responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this, for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing?
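    To make the collision-prevention point concrete, here is a minimal sketch of the same pattern from the module author's side, using the CommonJS-style exports implied by the require example above. The file and module names (logger.js, some-vendor/logger) are made up for illustration only.
    // logger.js - everything not attached to module.exports stays private to
    // this file's closure, so no global name is created that could collide.
    var prefix = "[app] ";            // private; invisible outside this module

    function log(message) {
      console.log(prefix + message);
    }

    module.exports = { log: log };    // the only thing this module exposes

    // consumer.js - the importer chooses the local name, so two modules that
    // both call themselves "logger" can live side by side.
    var logger = require("./logger");
    var vendorLogger = require("some-vendor/logger");   // hypothetical second module

    logger.log("the local name is chosen by the consumer, not the author");
    The point is the one the post makes: relocation is just variable assignment, so collisions are resolved at the import site instead of through globally unique names.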

    Read the article

  • Can't create admin user on Heroku

    - by Nick5a1
    I am new to Rails and I have gone through Kevin Skoglund's Ruby on Rails 3 Essential Training course on Lynda.com. Through the course you set up a simple CMS, which I did. It doesn't cover Git or deployment, but I've pushed my simple CMS to GitHub (https://github.com/nick5a1/Simple_CMS) and deployed to Heroku (http://nkarrasch.herokuapp.com/). In order to deploy to Heroku I followed the Heroku setup guide (https://devcenter.heroku.com/articles/rails3) and switched my database from MySQL to PostgreSQL. As instructed I changed gem 'mysql2' to gem 'sqlite3' in my Gemfile and ran bundle install before pushing. I then ran heroku run rake db:migrate. I'm running into 2 problems. When I try to log in (http://nkarrasch.herokuapp.com/access) I get an error "We're sorry, but something went wrong". I should instead be getting a flash message with invalid username/password combination. This is what I'm getting on my test environment on my local machine. Secondly, when I log into the Heroku console to create an admin user, when I try to save that user I get the following error:
    irb(main):004:0> user.save
    (1.2ms) BEGIN
    AdminUser Exists (1.9ms) SELECT 1 AS one FROM "admin_users" WHERE "admin_users"."username" = 'Nick5a1' LIMIT 1
    (1.7ms) ROLLBACK
    => false
    Any advice on how to troubleshoot would be greatly appreciated :). Thanks very much, Nick EDIT: Here are my Heroku logs: 2012-06-27T20:36:44+00:00 heroku[slugc]: Slug compilation started 2012-06-27T20:37:34+00:00 heroku[api]: Add shared-database:5mb add-on by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v2 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Add RAILS_ENV, LANG, PATH, RACK_ENV, GEM_PATH config by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v3 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Release v4 created by [email protected] 2012-06-27T20:37:34+00:00 heroku[api]: Deploy 1d82839 by [email protected] 2012-06-27T20:37:35+00:00 heroku[slugc]: Slug compilation finished 2012-06-27T20:37:36+00:00 heroku[web.1]: Starting process with command `bundle exec rails server -p 45450` 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. (called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:40+00:00 app[web.1]: DEPRECATION WARNING: You have Rails 2.3-style plugins in vendor/plugins! Support for these plugins will be removed in Rails 4.0. Move them out and bundle them in your Gemfile, or fold them in to your app as lib/myplugin/* and config/initializers/myplugin.rb. See the release notes for more on this: http://weblog.rubyonrails.org/2012/1/4/rails-3-2-0-rc2-has-been-released. 
(called from <top (required)> at /app/config/environment.rb:5) 2012-06-27T20:37:44+00:00 app[web.1]: => Rails 3.2.6 application starting in production on http://0.0.0.0:45450 2012-06-27T20:37:44+00:00 app[web.1]: => Call with -d to detach 2012-06-27T20:37:44+00:00 app[web.1]: => Booting WEBrick 2012-06-27T20:37:44+00:00 app[web.1]: Connecting to database specified by DATABASE_URL 2012-06-27T20:37:44+00:00 app[web.1]: => Ctrl-C to shutdown server 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick 1.3.1 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO ruby 1.9.2 (2011-07-09) [x86_64-linux] 2012-06-27T20:37:44+00:00 app[web.1]: [2012-06-27 20:37:44] INFO WEBrick::HTTPServer#start: pid=2 port=45450 2012-06-27T20:37:45+00:00 heroku[web.1]: State changed from starting to up 2012-06-27T20:39:44+00:00 heroku[run.1]: Awaiting client 2012-06-27T20:39:44+00:00 heroku[run.1]: Starting process with command `bundle exec rake db:migrate` 2012-06-27T20:39:44+00:00 heroku[run.1]: State changed from starting to up 2012-06-27T20:39:51+00:00 heroku[run.1]: Process exited with status 0 2012-06-27T20:39:51+00:00 heroku[run.1]: State changed from up to complete 2012-06-27T20:41:05+00:00 heroku[run.1]: Awaiting client 2012-06-27T20:41:05+00:00 heroku[run.1]: Starting process with command `bundle exec rails console` 2012-06-27T20:41:05+00:00 heroku[run.1]: State changed from starting to up 2012-06-27T20:46:09+00:00 heroku[run.1]: Process exited with status 0 2012-06-27T20:46:09+00:00 heroku[run.1]: State changed from up to complete

    Read the article

< Previous Page | 139 140 141 142 143 144 145 146 147 148 149 150  | Next Page >