Search Results

Search found 12558 results on 503 pages for 'publish location'.


  • How do I reset my pointer to a specific array location?

    - by ohtanya
    I am a brand new programming student, so please forgive my ignorance. My assignment states:

        Write a program that declares an array of 10 integers. Write a loop that accepts 10 values
        from the keyboard and write another loop that displays the 10 values. Do not use any
        subscripts within the two loops; use pointers only.

    Here is my code:

        #include "stdafx.h"
        #include <iostream>
        using namespace std;

        int main()
        {
            const int NUM = 10;
            int values[NUM];
            int *p = &values[0];
            int x;

            for (x = 0; x < NUM; ++x, ++p)
            {
                cout << "Enter a value: ";
                cin >> *p;
            }

            for (x = 0; x < NUM; ++x, ++p)
            {
                cout << *p << " ";
            }

            return 0;
        }

    I think I know where my problem is. After my first loop, my pointer is at values[10], but I need to get it back to values[0] to display the values. How can I do that?
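
    One common fix (a minimal sketch built only from the code above): reassign the pointer to the first element before the display loop. An array name decays to a pointer to its first element, so either form below works.

        // Reset the pointer to the start of the array before the second loop.
        p = values;          // equivalent to: p = &values[0];

        for (x = 0; x < NUM; ++x, ++p)
        {
            cout << *p << " ";
        }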

    Read the article

  • can't access nginx server from IP

    - by EquinoX
    Two days ago I could see the page that says "Welcome to nginx", but as of now when I try to access it, it says "404 page not found"... Why is this? Inside my sites-enabled folder I have a file named default with the following:

        # You may add here your
        # server {
        #     ...
        # }
        # statements for each of your virtual hosts

        server {
            listen 80;
            server_name 127.0.0.1;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }

            location /doc {
                root /usr/share;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location /images {
                root /usr/share;
                autoindex on;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /var/www/nginx-default;
            #}

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            location ~ /\.ht {
                deny all;
            }
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    location / {
        #        root html;
        #        index index.html index.htm;
        #    }
        #}

        # HTTPS server
        #
        #server {
        #    listen 443;
        #    server_name localhost;
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #    ssl_session_timeout 5m;
        #    ssl_protocols SSLv2 SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        #    ssl_prefer_server_ciphers on;
        #    location / {
        #        root html;
        #        index index.html index.htm;
        #    }
        #}

    Here's my nginx.conf file:

        user www-data;
        worker_processes 4;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

        # mail {
        #     # See sample authentication script at:
        #     # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript
        #
        #     # auth_http localhost/auth.php;
        #     # pop3_capabilities "TOP" "USER";
        #     # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #     server {
        #         listen localhost:110;
        #         protocol pop3;
        #         proxy on;
        #     }
        #
        #     server {
        #         listen localhost:143;
        #         protocol imap;
        #         proxy on;
        #     }
        # }

    What am I doing wrong here? I have other virtual hosts set up in sites-enabled as well...

    UPDATE: The server_name directives are:

    - admin.api.frapi
    - api.frapi
    - default
    - example.com
    - php.example.com
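
    A hedged guess based on that update: with several server blocks enabled, a request made by raw IP carries a Host header that matches none of the server_name values, so nginx routes it to whichever block it treats as the default server -- normally the first one it loads, which can change as files are added to sites-enabled. A minimal sketch of pinning that choice explicitly:

        server {
            # Serve requests whose Host matches no other server_name
            # (e.g. requests made directly to the IP address).
            listen 80 default_server;
            server_name 127.0.0.1;
            root /var/www/nginx-default;
            index index.html index.htm;
        }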

    Read the article

  • website connection reset on first load

    - by Tar
    I'm using nginx with php-cgi. Lately a problem has arisen: if you don't view my site for a while, like 3-4 minutes, and then open it again, the first request you send returns "connection reset by peer" in the browser. If you refresh, operation is normal for all subsequent requests. This happens every time, it isn't just an isolated incident, and it happens to everyone using my site. I've tried restarting nginx and php-cgi, but to no avail. Does anyone know what the problem could be? I can provide whatever information is necessary. It's worth noting that there's nothing in the error log besides a message about the client closing the connection early.

    nginx.conf:

        user nobody;
        worker_processes 4;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include /etc/nginx/mime.types;
            error_page 404 /404.html;
            error_page 403 /403.html;
            error_page 444 /444.html;
            error_page 502 /502.html;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            access_log /var/log/nginx/access.log main;
            large_client_header_buffers 8 8k;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 30;
            server_tokens off;
            gzip on;
            gzip_proxied any;
            gzip_comp_level 6;
            gzip_buffers 64 8k;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /etc/nginx/conf.d/*.conf;
        }

    default.conf:

        server {
            listen 80;
            server_name domain.com;
            error_log /var/log/nginx/error.log debug;
            access_log /var/log/nginx/access.log;

            location / {
                if ($request_method !~ ^(GET|HEAD|POST)$ ) {
                    return 444;
                }
                if ($http_user_agent ~* Havij|hvj|acunetix|wget|HTtrack) {
                    return 403;
                }
                root /home/admin06/public_html;
                autoindex off;
                index index.php;
            }

            # Images and static content is treated different
            location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                access_log off;
                expires 30d;
                root /home/admin06/public_html;
            }

            location /nginx_status {
                stub_status on;
                access_log off;
                deny all;
            }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.*)$;
                #try_files $uri =404;
                fastcgi_pass backend;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/site/public_html$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
                fastcgi_intercept_errors on;
                fastcgi_ignore_client_abort off;
                fastcgi_connect_timeout 60;
                fastcgi_send_timeout 60;
                fastcgi_read_timeout 60;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
            }

            ## Disable viewing .htaccess & .htpassword
            location ~ /\.ht {
                deny all;
            }

            location ~ error_log {
                deny all;
            }

            location ~ access_log {
                deny all;
            }

            location ~ \.cgi {
                deny all;
            }

            location ~ \.db {
                deny all;
            }
        }

    Read the article

  • What is the correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access a page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx codes fit that description:

    10.3.1 300 Multiple Choices

    The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.

    If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.

    10.3.2 301 Moved Permanently

    The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

    The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

    If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

    10.3.3 302 Found

    The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

    The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

    If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.

    10.3.4 303 See Other

    The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.

    The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

    Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303.

    10.3.5 304 Not Modified

    If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields:

    - Date, unless its omission is required by section 14.18.1. If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
    - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request.
    - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant.

    If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.

    If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.

    10.3.6 305 Use Proxy

    The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.

    Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

    10.3.7 306 (Unused)

    The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.

    10.3.8 307 Temporary Redirect

    The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.

    The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.

    If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    I'm using 302 for now, until I find THE correct answer.
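
    For illustration, a minimal sketch of the 302 approach the question settles on (hypothetical URLs); the original target is carried in a query parameter so the user can be returned to it after logging in:

        GET /account HTTP/1.1
        Host: example.com

        HTTP/1.1 302 Found
        Location: http://example.com/login?return_to=%2Faccount
        Content-Type: text/html

        <html><body>Please <a href="/login?return_to=%2Faccount">log in</a> to continue.</body></html>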

    Read the article

  • Running OWSM WLST commands - 11g

    - by Prakash Yamuna
    It has been some time since I posted any material. I hope to have more tips and best practices soon, but here is a quick thing that people seem to trip up on: how to run OWSM-related WLST commands.

    The most common issue people run into is trying to run the OWSM WLST commands from the wrong location. You need to run the WLST commands from "oracle_common/common/bin/wlst.sh" (e.g., /home/Oracle/Middleware/oracle_common/common/bin/wlst.sh). This is documented in the OWSM documentation (Security and Administrator's Guide) under the section "Accessing the Web Services Custom WLST Commands".

    Note: This location is different from the location from which you run the SOA WLST commands.

    If you try to run an OWSM WLST command from the wrong location, you will see errors like the following:

        "The MBean ,@ oracle.wsm:*,name=WSMDocumentManager,type=Repository was not found"
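
    As a quick sketch of starting from the right place (the credentials and admin-server URL below are placeholders, not values from the post):

        $ /home/Oracle/Middleware/oracle_common/common/bin/wlst.sh
        wls:/offline> connect('weblogic', '<password>', 't3://adminhost:7001')
        # The OWSM custom WLST commands are available from this shell; the same
        # commands fail with the MBean error above when wlst.sh is launched
        # from another home.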

    Read the article

  • juju bootstrap fails with a local environment, why?

    - by Braiam
    Each time I try to bootstrap juju using a local environment, it fails starting the juju-db-braiam-local script, as follows:

        $ sudo juju --debug --verbose bootstrap
        2013-10-20 02:28:53 INFO juju.provider.local environprovider.go:32 opening environment "local"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:210 found "10.0.3.1" as address for "lxcbr0"
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:234 checking 10.0.3.1:8040 to see if machine agent running storage listener
        2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:237 nope, start some
        2013-10-20 02:28:53 DEBUG juju.environs.tools storage.go:87 Uploading tools for [raring precise]
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:109 looking for: juju
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:150 checking: /usr/bin/jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:156 found existing jujud
        2013-10-20 02:28:53 INFO juju.environs.tools build.go:166 target: /tmp/juju-tools243949228/jujud
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:217 forcing version to 1.14.1.1
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"FORCE-VERSION", Mode:420, Uid:0, Gid:0, Size:8, ModTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"jujud", Mode:493, Uid:0, Gid:0, Size:19179512, ModTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}}
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:106 built 1.14.1.1-raring-amd64 (4196kB)
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 INFO juju.environs.boostrap bootstrap.go:57 bootstrapping environment "local"
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1
        2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64
        2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:395 create mongo journal dir: /home/braiam/.juju/local/db/journal
        2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:401 generate server cert
        2013-10-20 02:28:55 INFO juju.provider.local environ.go:421 installing service juju-db-braiam-local to /etc/init
        2013-10-20 02:28:56 ERROR juju.provider.local environ.go:423 could not install mongo service: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        2013-10-20 02:28:56 ERROR juju supercommand.go:282 command failed: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)
        error: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start)

    What is the reason for this error, and how do I solve it?
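
    The post doesn't contain the answer, but a hedged first step for digging further: run the failing Upstart job by hand and read its log. This assumes a stock Ubuntu of that era, where Upstart writes job output under /var/log/upstart:

        $ sudo start juju-db-braiam-local
        start: Job failed to start
        $ sudo tail -n 50 /var/log/upstart/juju-db-braiam-local.log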

    Read the article

  • Refresh bounded taskflows across regions using InputParameters

    - by raghu.yadav
    Use case 1: Selecting a record from a table in the left region refreshes a dependent detail form for the same table in the right region, using input parameters. Here is the example given by Andre.

    Three important points to note from the above example:

    1) Create a primary key attribute in the page definition of the table in region 1.
    2) Add an input parameter name in the task flow input parameters of region 2.
    3) Bind the primary key attribute from the page definition to the above input parameter in the main page where the two regions are dropped.

    Use case 2: Selecting a record from the location table in the left region refreshes the corresponding department records from the department table in the right region.

    1) Create a bind variable on location id in the DepartmentVO.
    2) Create an input parameter, say LocationParam, with type Number and value #{pageFlowScope.LocationParam} (see the sketch below).
    3) Assign the LocationId parameter from the page definition to LocationParam in task flow 2.
    4) Create an ExecuteWithParams action in the region-2 page definition and invoke it on the IfRefresh condition.

    At run time the steps execute in reverse order (3, 2, 1): as the user selects a row in the location table, the location id is passed from the page definition to LocationParam, from there into pageFlowScope, and from there to the view criteria.
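
    As a rough illustration of step 2 of use case 2, the input parameter definition inside the region-2 task flow's XML would look something like the following. This is a sketch from memory of the ADF task-flow metadata, not taken from the original post; the flow id and the Number class are assumptions:

        <task-flow-definition id="region2-task-flow">
          <input-parameter-definition>
            <name>LocationParam</name>
            <value>#{pageFlowScope.LocationParam}</value>
            <class>java.lang.Number</class>
          </input-parameter-definition>
          <!-- view activities, etc. -->
        </task-flow-definition>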

    Read the article

  • Scid Chess Program Compiling issue

    - by lbochitt
    First of all I would like to point out that I'm new to Ubuntu, so sorry if what I am asking is ridiculous. I have downloaded the Scid 4.4 chess program and I have tried to compile it as explained on its website:

    1) Initialize git.
    2) Create a folder where you want to download and compile the source, then run git init on your command line.
    3) You're now ready to download the sources; recall Fulvio's spell: git clone git://scid.git.sourceforge.net/gitroot/scid/scid -- this should get you the latest Scid source.
    4) You're now ready to compile Scid. In principle, all you need to do is ./configure and then make.
    5) If you get stuck, you probably need to get developer versions of tcl/tk. This translates into issuing these three commands:

        sudo apt-get install tcl8.5-dev
        sudo apt-get install tk8.5-dev
        sudo apt-get install zlib1g-dev

    6) You should now be ready to compile.

    The problem is that when I run ./configure to start compiling, the following message appears in the terminal:

        configure: Makefile configuration program for Scid
        Tcl/Tk version: 8.5
        Your operating system is: Linux 3.8.0-19-generic
        Location of "tcl.h": /usr/include/tcl8.5
        Location of "tk.h": /usr/include/tcl8.5
        Location of Tcl 8.5 library: not found
        Location of Tk 8.5 library: not found
        Checking if your system already has zlib installed: yes.
        Using Makefile.conf.
        Not all settings could be determined!
        The default Makefile was written.
        You will need to edit it before you can compile Scid.

    What should I do? Has anybody faced this problem before? Thanks in advance.

    I have run ls -l /usr/include/tcl8.5/tcl.h; here's the result:

        -rw-r--r-- 1 root root 87291 abr 22 10:45 /usr/include/tcl8.5/tcl.h

    I have also tried what you suggested:

        Could you run git reset --hard HEAD and git clean -d -f to clean up everything using Git? Then run ./configure again. Just a shot in the dark - I've seen some GNU automake stuff still listening to its "cached" version of the results or something.

    Still no solution. I don't know why it can't recognize the library even though it is installed. I opened configure to see where the program looks for the library. This is the code:

        # libraryPath: List of possible locations of Tcl/Tk library.
        set libraryPath {
            /usr/lib
            /usr/lib64
            /usr/local/tcl/lib
            /usr/local/lib
            /usr/X11R6/lib
            /opt/tcltk/lib
            /usr/lib/x86_64-linux-gnu
        }
        lappend libraryPath "/usr/lib/tcl${tclv}"
        lappend libraryPath "/usr/lib/tk${tclv}"
        lappend libraryPath "/usr/lib/tcl${tclv_nodot}"
        lappend libraryPath "/usr/lib/tk${tclv_nodot}"

        # Try to add tcl_library and auto_path values to libraryPath,
        # in case the user has a non-standard Tcl/Tk library location:
        if {[info exists ::tcl_library]} {
            lappend headerPath \
                [file join [file dirname [file dirname $::tcl_library]] include]
            lappend libraryPath [file dirname $::tcl_library]
            lappend libraryPath $::tcl_library
        }
        if {[info exists ::auto_path]} {
            foreach name $::auto_path {
                lappend libraryPath $name
            }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            puts -nonewline {    Location of "tcl.h": }
            set opt(tcl_h) [findDir "tcl.h" $headerPath "TCL_VERSION.*$tclv"]
            if {$opt(tcl_h) == ""} {
                puts "not found"
                set success 0
                set opt(tcl_h) "$::defaultVar(TCL_INCLUDE)"
            } else {
                puts $opt(tcl_h)
            }
        }
        set opt(tcl_lib) ""
        if {! [info exists var(TCL_LIBRARY)]} {
            puts -nonewline "    Location of Tcl $tclv library: "
            set opt(tcl_lib) [findDir "libtcl${tclv}.*" $libraryPath]
            if {$opt(tcl_lib) == ""} {
                set opt(tcl_lib) [findDir "libtcl${tclv_nodot}.*" $libraryPath]
                if {$opt(tcl_lib) == ""} {
                    puts "not found"
                    set success 0
                    set opt(tcl_lib) "$opt(tcl_h)"
                    set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                } else {
                    set opt(tcl_lib_file) "tcl${tclv_nodot}"
                    puts $opt(tcl_lib)
                }
            } else {
                set opt(tcl_lib_file) "tcl\$(TCL_VERSION)"
                puts $opt(tcl_lib)
            }
        }
        if {! [info exists var(TCL_INCLUDE)]} {
            set var(TCL_INCLUDE) "-I$opt(tcl_h)"
        }
        if {! [info exists var(TCL_LIBRARY)]} {
            set var(TCL_LIBRARY) "-L$opt(tcl_lib) -l$opt(tcl_lib_file)"
        }
        return $success

    So I guess (and by guess I mean I have no idea how to code) I should write somewhere in here usr/lib/tcl8.5 and usr/lib/tk8.5, am I right?
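
    Following that guess: since configure only probes the directories listed in libraryPath, one low-risk experiment is to first check where libtcl8.5/libtk8.5 actually live (e.g. find /usr/lib -name "libtcl8.5*") and then append those directories to the list in the script. A sketch; the paths are illustrative and some may already be present:

        lappend libraryPath "/usr/lib/tcl8.5"
        lappend libraryPath "/usr/lib/tk8.5"
        lappend libraryPath "/usr/lib/x86_64-linux-gnu"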

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad...

    Monday, April 19

    9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service
    Location: Keynote Hall
    I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions...don't miss it.

    10:45 a.m. - Oracle Fusion Applications: Functional Overview
    Location: South Seas F
    I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20, 8:00 a.m. in room 3662), should give attendees the update they need about Oracle's next-generation applications.

    1:15 p.m. - E-Business Suite in the Amazon Cloud
    Location: South Seas H
    I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation.

    2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance
    Location: South Seas H
    I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus, the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations.

    Tuesday

    8:00 a.m. - Information Architecture for Men in Kilts
    Location: SURF A
    Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise. Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to an enterprise 2.0 implementation.

    10:30 a.m. - Oracle Virtualization: From Desktop to Data Center
    Location: REEF F
    Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject, so I'm looking forward to Monica Kumar's presentation so I can get up to speed.

    Wednesday

    8:00 a.m. - The Art of the Steal
    Location: Mandalay Bay Ballroom J
    Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday.

    1:00 p.m. - Google Wave: Will it replace e-mail as we know it today?
    Location: SURF E
    By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • Apache2 "pseudo" doc root

    - by Brent
    I have several folders in my /www folder that contain various applications. To keep things organized, I keep them in their own folders--this includes my base application. Examples:

    - phpmyadmin = /www/phpmyadmin
    - phpvirtualbox = /www/phpvirtualbox
    - root domain site = /www/Landing

    The reason I segregate all of my sites is that I actively develop on some of these (my root site), and when I publish via Visual Studio I choose to delete prior to upload--if I put the landing page in the base folder, it would be devastating for me. My goal is that when I go to www.example.com, I go to my page. If I go to www.example.com/phpmyadmin, it does not work because of this in the Apache2 config:

        # Error is the "/"
        <Location "/">
            Allow from all
            Order allow,deny
            MonoSetServerAlias domain
            SetHandler mono
            SetOutputFilter DEFLATE
            SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
        </Location>
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
        </IfModule>

    If I change the location to, say, "/Other", then the base site is broken and the aliases are restored for the other sites. If it is "/", then the base site works and no aliases work. What could I do to make Apache treat my /www/Landing as my web root, but still go to an alias when I request one?

    Edit: Added in the default VirtualHost info.

        DocumentRoot /var/www
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.example.com
            ExpiresActive On
            ExpiresByType image/gif A2592000
            ExpiresByType image/png A2592000
            ExpiresByType image/jpg A2592000
            ExpiresByType image/jpeg A2592000
            ExpiresByType text/css "access plus 1 days"
            MonoServerPath domain "/usr/bin/mod-mono-server4"
            MonoDebug domain true
            MonoSetEnv domain MONO_IOMAP=all
            MonoApplications domain "/:/var/www/Landing"
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule (.*) /Landing/$1 [L]
            # Need to watch what the Location is set to. Can cause issues for alias
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias domain
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
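
    One hedged approach (a sketch, not tested against this exact setup): leave the mono handler on "/" and carve the alias paths back out with more specific Location blocks. Location sections are merged in the order they appear in the configuration, so placing these after the "/" block lets them reset the handler it set:

        # Serve these directories normally instead of through mod_mono
        <Location "/phpmyadmin">
            SetHandler None
        </Location>
        <Location "/phpvirtualbox">
            SetHandler None
        </Location>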

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but one stood out: remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to use logs and the diagnostic monitor to debug, which is very inefficient.

    Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation under any user to attach to a remote process. This is less secure compared to authenticated remote debug, but much easier and simpler to use. Now in Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, let's create a new Windows Azure Cloud Project in Visual Studio with an ASP.NET Web Role selected, then create an ASP.NET WebForm application. Right-click on the cloud project and select "Publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debug.

    Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check it; currently there's no way to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role and instance we just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (it depends on how fast our Internet connection to the Windows Azure data center is), Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we will find the page rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them there.

    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component (based on which version of Visual Studio we are using) and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from a local machine once it has been deployed to a Windows Azure virtual machine. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing our beta version to the Azure staging slot), rather than for production.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • JQGrid PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

    JQGrid PDF Export

    The aim of this article is to address PDF export from client-side grid frameworks. The solution is built using ASP.Net MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.Net MVC and C#.

    Tools Used:
    - Visual Studio 2012
    - ASP.Net MVC 4
    - NuGet Package Manager

    JQGrid is one of the client-side grid frameworks built on top of the JQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package; the install command can be run from the Package Manager Console (from the Tools menu select "Library Package Manager" and then "Package Manager Console"; the command itself appears in a screenshot in the original post). It pulls down the latest JQGrid package and adds the scripts to the Scripts folder.

    Once the script is downloaded and referenced in the project, update the bundle config to add the script references to the pages. BundleConfig can be found in the App_Start folder in the project structure:

        bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
        bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

    Once the configs are added, refer to the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page:

        @Styles.Render("~/Content/jqgrid")

    Add the following lines at the end of the page, before the closing html tag:

        @Scripts.Render("~/bundles/jquery")
        @Scripts.Render("~/bundles/jqueryui")
        @Scripts.Render("~/bundles/jquerygrid")

    That's all to be done from the view perspective. Once these steps are done, the developer can start coding against JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action; we will add a nullable bool argument to it, just to mark the PDF request. In Index.cshtml we will add a table tag with the id "gridTable"; we will use this table for building the grid. Since JQGrid is an extension of JQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page, just below the bundle references for JQuery and JQueryUI, to improve performance (one of the improvement factors from Yahoo's "why slow" guidance):

        <table id="gridTable" class="scroll"></table>
        <input type="button" value="Export PDF" onclick="exportPDF();" />

        @section scripts {
        <script type="text/javascript">
            $(document).ready(function () {
                $("#gridTable").jqGrid({
                    datatype: "json",
                    url: '@Url.Action("GetCustomerDetails")',
                    mtype: 'GET',
                    colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
                    colModel: [
                        { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                        { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                        { name: "Location", width: 40, index: "Location", align: "center" },
                        { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
                    ],
                    height: 250,
                    autowidth: true,
                    sortorder: "asc",
                    rowNum: 10,
                    rowList: [5, 10, 15, 20],
                    sortname: "CustomerID",
                    viewrecords: true
                });
            });

            function exportPDF() {
                document.location = '@Url.Action("Index")?pdf=true';
            }
        </script>
        }

    The exportPDF method just sets the document location to the Index action method with the pdf boolean set to true, to mark the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that provides the data as a JSON list:

        [HttpGet]
        public JsonResult GetCustomerDetails()
        {
            var result = new
            {
                total = 1,
                page = 1,
                records = customerList.Count(),
                rows = (customerList.Select(e => new
                {
                    id = e.CustomerID,
                    cell = new string[]
                    {
                        e.CustomerID.ToString(),
                        e.CustomerName,
                        e.Location,
                        e.PrimaryBusiness
                    }
                })).ToArray()
            };
            return Json(result, JsonRequestBehavior.AllowGet);
        }

    JQGrid expects the response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands it properly: the response contains the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and sent as an MVC JsonResult. Since we are using HTTP GET, it's better to mark the action with the HttpGet attribute and set the JSON request behavior to AllowGet.

    The in-memory list is initialized in the HomeController constructor:

        public class HomeController : Controller
        {
            private readonly IList<CustomerViewModel> customerList;

            public HomeController()
            {
                customerList = new List<CustomerViewModel>()
                {
                    new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
                    new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
                    new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" }
                };
            }

            public ActionResult Index(bool? pdf)
            {
                if (!pdf.HasValue)
                {
                    return View(customerList);
                }
                else
                {
                    string filePath = Server.MapPath("Content") + "Sample.pdf";
                    ExportPDF(customerList,
                        new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" },
                        filePath);
                    return File(filePath, "application/pdf", "list.pdf");
                }
            }
        }

    The Index action method has a boolean argument named "pdf", used to indicate a PDF download. When the application starts, this method is hit first for the initial page request. For the PDF operation a file name is generated and passed to the ExportPDF method, which takes care of generating the PDF from the data source. The ExportPDF method is listed below:

        private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
        {
            Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
            Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
            Document document = new Document(PageSize.A4);
            PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
            document.Open();

            PdfPTable table = new PdfPTable(columns.Length);
            foreach (var column in columns)
            {
                PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
                cell.BackgroundColor = Color.BLACK;
                table.AddCell(cell);
            }

            foreach (var item in customerList)
            {
                foreach (var column in columns)
                {
                    string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                    PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                    table.AddCell(cell5);
                }
            }

            document.Add(table);
            document.Close();
        }

    iTextSharp is one of the pioneers in PDF export. It's an open-source library readily available as a NuGet package; I am using version 4.1.2.0 (the latest version may have changed). There are three main things in this library:

    Document: This is the class that takes care of creating the document sheet with a particular size. We have used A4 size; there is also an option to define a rectangle size. This document instance is used in the following methods for reference.

    PdfWriter: PdfWriter takes the file stream and the document as references. This class enables the document class to generate the PDF content and save it to a file.

    Font: Using the Font class the developer can control the font features. Since I need a nice-looking font, I am using Verdana.

    Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We created two fonts, for the header and the rows:

        Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
        Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

    We receive the header columns as a string array. The columns array is looped over and the header is generated using the header font:

        PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
        document.Open();

        PdfPTable table = new PdfPTable(columns.Length);
        foreach (var column in columns)
        {
            PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
            cell.BackgroundColor = Color.BLACK;
            table.AddCell(cell);
        }

    Then reflection is used to read each property value and generate the row-wise details that form the grid:

        foreach (var item in customerList)
        {
            foreach (var column in columns)
            {
                string value = item.GetType().GetProperty(column).GetValue(item).ToString();
                PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
                table.AddCell(cell5);
            }
        }
        document.Add(table);
        document.Close();

    Once this is done, the PDF table is added to the document and the document is closed to write all the changes to the given file path. Control then moves back to the controller, which sends the response as a file result with a file name. If the file name is not given, the PDF will open in the same page; otherwise a popup will open asking whether to save or open the file.

        return File(filePath, "application/pdf", "list.pdf");

    The final result screen and the generated PDF are shown in the original post.

    Conclusion: This is how PDF export is done for JQGrid. The problem area addressed here is that client-side grid frameworks don't support PDF export themselves; in that case it's better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us achieve our goal.

    Read the article

  • Is data integrity possible without normalization?

    - by shuniar
    I am working on an application that requires the storage of location information such as city, state, zip code, latitude, and longitude. I would like to ensure:

    Location data is accurate:
    - "Detroit, CA" -- Detroit IS NOT in California
    - "Detroit, MI" -- Detroit IS in Michigan

    Cities and states are spelled correctly:
    - California, not Calefornia
    - Detroit, not Detriot

    Cities and states are named consistently:
    - Valid: CA, Detroit
    - Invalid: Cali, california, DET, d-town, The D

    Also, since city/zip data is not guaranteed to be static, updating this data in a normalized fashion could be difficult, whereas it could be implemented as a de facto location if it is denormalized. A couple of thoughts come to mind:

    1. A collection of reference tables that store a list of all states and the most common cities and zip codes, and that can grow over time. The application would search the database for an exact or similar match and recommend corrections (see the sketch below).
    2. Use some sort of service to validate the location data before it is stored in the database.

    Is it possible to fulfill these requirements without normalization, and if so, should I denormalize this data?
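
    As a rough sketch of the first idea (names and data are hypothetical, and a real reference table would live in the database rather than in code): validate and canonicalize a (city, state) pair against reference data before storing it.

        # Validate and canonicalize a (city, state) pair against reference data.
        REFERENCE = {
            ("detroit", "MI"),
            ("san francisco", "CA"),
        }

        def normalize(city, state):
            """Return a canonical (City, STATE) pair, or raise if unknown."""
            key = (city.strip().lower(), state.strip().upper())
            if key not in REFERENCE:
                raise ValueError("Unknown location: %s, %s" % (city, state))
            return (key[0].title(), key[1])

        print(normalize(" detroit ", "mi"))   # -> ('Detroit', 'MI')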

    Read the article

  • Oracle Linux Training Across Five Continents

    - by Antoinette O'Sullivan
    The Oracle Linux System Administration course, a top-selling course, provides you with a broad selection of the key competencies you need to be a great Linux system administrator. And you can now take this course from your desk or in classrooms across all five continents.

    You can take this 5-day instructor-led course through the following delivery methods:

    - Training-on-Demand: Start training within 24 hours of registering. You follow the lecture material at your own pace via streaming video and book time on a lab environment to suit your schedule.
    - Live-Virtual Event: Follow a live event from your own desk, no travel required. You can choose from a selection of events on the schedule to suit different time zones.
    - In-Class Event: Travel to an education center to take this course.

    Below is a selection of the in-class events already on the schedule.

    AFRICA
    - Nairobi, Kenya: 13 October 2014 (English)
    - Johannesburg, South Africa: 24 November 2014 (English)

    AMERICA
    - Mississauga, Canada: 27 October 2014 (English)
    - Chicago, IL, United States: 13 October 2014 (English)
    - Roseville, MN, United States: 13 October 2014 (English)

    ASIA
    - Jakarta, Indonesia: 20 October 2014 (English)
    - Petaling Jaya, Malaysia: 25 August 2014 (English)
    - Kuala Lumpur, Malaysia: 8 December 2014 (English)
    - Istanbul, Turkey: 10 November 2014 (Turkish)
    - Dubai, United Arab Emirates: 4 January 2015 (English)

    AUSTRALIA
    - Canberra, Australia: 20 October 2014 (English)
    - Melbourne, Australia: 20 October 2014 (English)

    EUROPE
    - Paris, France: 6 October 2014 (French)
    - Milan, Italy: 20 October 2014 (Italian)
    - Rome, Italy: 8 September 2014 (Italian)
    - Bucharest, Romania: 27 October 2014 (Romanian)
    - Madrid, Spain: 1 September 2014 (Spanish)

    The Oracle Linux System Administration course is the recommended training to prepare you for the Oracle Linux 5 & 6 System Administrator OCA certification exam. Those who have acquired the skills provided in the Oracle Linux System Administration course can advance their learning by taking the Oracle Linux Advanced Administration course. You can take this 5-day instructor-led course as a live-virtual event or an in-class event. Below is a selection of the in-class events on the schedule:

    - Jakarta, Indonesia: 27 October 2014 (English)
    - Kuala Lumpur, Malaysia: 6 October 2014 (English)
    - Bangkok, Thailand: 20 October 2014 (English)
    - Belmont, CA, United States: 15 September 2014 (English)

    For information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

    Read the article

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88), which reverse-proxies to a web app (port 5001). However, when the web app responds with a redirect like "Location: /foo", apache2 rewrites this into "Location: http://host.com:88/sub/foo", even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL "Location: /sub/foo". Any ideas?

    My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute):

        <Location /notes/>
            Allow from all
            ProxyPass http://127.0.0.1:5001/
            SetOutputFilter proxy-html
            ProxyPassReverse /
            ProxyHTMLURLMap / /notes/
            RequestHeader unset Accept-Encoding
            AddOutputFilterByType SUBSTITUTE application/atom+xml
            Substitute "s|127.0.0.1:5001|host.com/notes|"
        </Location>
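
    One hedged workaround, since nginx is the public-facing layer: let nginx rewrite the leaked absolute Location headers back to relative form with proxy_redirect. A sketch, assuming an nginx proxy block roughly like this exists (it is not shown in the question):

        location / {
            proxy_pass http://127.0.0.1:88;
            # Rewrite "Location: http://host.com:88/..." headers coming back
            # from apache2 into relative "/..." before they reach the client.
            proxy_redirect http://host.com:88/ /;
        }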

    Read the article

  • Gamemaker: Making a bullet Spawn at the enemy it was called from

    - by Strokes
    I'm making a GameMaker game with GML. In this game I have multiple enemies (the same object) on screen at the same time. I want each of them to spawn a bullet at its own location, but instead every enemy spawns its bullet at one single enemy: they all shoot, but the bullets appear in the wrong location. I want the bullet to spawn at the location of the instance it was called from. How do I do this? Thank you for reading my question.

    Code: obj_carrier is the enemy I want to spawn from; obj_carrier_bullet is the bullet I want to spawn at the location of the carrier. There are multiple carriers around the stage. In the step event of the carrier, inside an if statement, is the following:

        instance_create(obj_carrier.x, obj_carrier.y, obj_carrier_bullet)
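
    The usual fix, judging only from the code shown: inside an instance's own step event, prefixing with the object name (obj_carrier.x) resolves to just one instance of that object, which is why every bullet appears at the same enemy. The bare built-in variables x and y already refer to the instance running the event:

        // In obj_carrier's step event: spawn at THIS carrier's own position
        instance_create(x, y, obj_carrier_bullet);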

    Read the article

  • Clean way to use mutable implementation of Immutable interfaces for encapsulation

    - by dsollen
    My code models a composite relationship which creates a tree structure: class A has many children of type B, which has many children of type C, and so on. The lowest-level class, call it Bar, also points to a connected Bar. This effectively makes nearly every object in my domain inter-connected. Immutable objects would be problematic due to the expense of rebuilding almost all of my domain to make a single change to one class. I chose to go with an interface approach instead. Every object has an immutable interface which only publishes the getter methods. I have controller objects which construct the domain objects and thus hold references to the full objects, so they are capable of calling the setter methods; but they only ever publish the immutable interface. Any requested change goes through the controller. So something like this:

        public interface ImmutableFoo {
            public Bar getBar();
            public Location getLocation();
        }

        public class Foo implements ImmutableFoo {
            private Bar bar;
            private Location location;

            @Override
            public Bar getBar() {
                return bar;
            }

            public void setBar(Bar bar) {
                this.bar = bar;
            }

            @Override
            public Location getLocation() {
                return location;
            }
        }

        public class Controller {
            private Map<Location, Foo> fooMap;

            public ImmutableFoo addBar(Bar bar) {
                Foo foo = fooMap.get(bar.getLocation());
                if (foo != null)
                    foo.addBar(bar);
                return foo;
            }
        }

    I felt the basic approach was sensible; however, when I speak to others they always seem to have trouble envisioning what I'm describing, which leaves me concerned that I may have a larger design issue than I'm aware of. Is it problematic to have domain objects so tightly coupled, or to use this quasi-mutable approach to modifying them? Assuming the design approach itself isn't inherently flawed, the particular discussion which left me wondering about my approach had to do with the presence of business logic in the domain objects. Currently the setter methods in my mutable objects do error checking and all other logic required to verify and make a change to the object. It was suggested that this should be pulled out into a service class which applies all the business logic, to simplify my domain objects. I understand the advantage for mocking/testing and the general separation of logic into two classes. However, with a service method/object it seems I lose some of the advantage of polymorphism: I can't override a base class to add new error checking or business logic. It seems that if my polymorphic classes were complicated enough, I would end up with a service method that has to check a dozen flags to decide what error checking and business logic applies. So, for example, if I wanted a ChildFoo which also had a size field that should be compared to bar before adding bar, my current approach would look something like this:

        public class Foo implements ImmutableFoo {
            public void addBar(Bar bar) {
                if (!getLocation().equals(bar.getLocation()))
                    throw new LocationException();
                this.bar = bar;
            }
        }

        public interface ImmutableChildFoo extends ImmutableFoo {
            public int getSize();
        }

        public class ChildFoo extends Foo implements ImmutableChildFoo {
            private int size;

            @Override
            public int getSize() {
                return size;
            }

            @Override
            public void addBar(Bar bar) {
                if (getSize() < bar.getSize())
                    throw new LocationException();
                super.addBar(bar);
            }
        }

    My colleague was suggesting instead having a service object that looks something like this (over-simplified; the 'service' object would likely be more complex):

        public interface ImmutableFoo { // original interface, presumably used in other methods
            public Location getLocation();
            public boolean isChildFoo();
        }

        public interface ImmutableSizedFoo extends ImmutableFoo {
            public int getSize();
        }

        public class Foo implements ImmutableSizedFoo {
            private Bar bar;

            public void addBar(Bar bar) {
                this.bar = bar;
            }

            @Override
            public int getSize() {
                // default size if no size is known
                return 0;
            }

            @Override
            public boolean isChildFoo() {
                return false;
            }
        }

        public class ChildFoo extends Foo {
            private int size;

            @Override
            public int getSize() {
                return size;
            }

            @Override
            public boolean isChildFoo() {
                return true;
            }
        }

        public class Controller {
            private Map<Location, Foo> fooMap;

            public ImmutableSizedFoo addBar(Bar bar) {
                Foo foo = fooMap.get(bar.getLocation());
                Service.addBarToFoo(foo, bar);
                return foo;
            }
        }

        public class Service {
            public static void addBarToFoo(Foo foo, Bar bar) {
                if (foo == null)
                    return;
                if (!foo.getLocation().equals(bar.getLocation()))
                    throw new LocationException();
                if (foo.isChildFoo() && foo.getSize() < bar.getSize())
                    throw new LocationException();
                foo.addBar(bar);
            }
        }

    Is the recommended approach of using services and inversion of control inherently superior, or superior in certain cases, to overriding methods directly? If so, is there a good way to go with the service approach while not losing the power of polymorphism to override some of the behavior?
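
    A sketch of one possible middle ground (added for illustration, not from the original post): keep the service as the single entry point, but push the subclass-specific checks into an overridable hook, so the service stays free of type flags while polymorphism still drives validation. The validateBar() helper is an invented name; Bar, Location, and LocationException are the question's own types:

        public class Foo {
            private Bar bar;
            private final Location location;

            public Foo(Location location) { this.location = location; }

            public Location getLocation() { return location; }

            // subclasses override this to add their own checks
            protected void validateBar(Bar bar) {
                if (!getLocation().equals(bar.getLocation()))
                    throw new LocationException();
            }

            public void addBar(Bar bar) {
                validateBar(bar);
                this.bar = bar;
            }
        }

        public class ChildFoo extends Foo {
            private int size;

            public ChildFoo(Location location, int size) {
                super(location);
                this.size = size;
            }

            @Override
            protected void validateBar(Bar bar) {
                super.validateBar(bar); // keep the base checks
                if (size < bar.getSize())
                    throw new LocationException();
            }
        }

        public class Service {
            // stays generic: subclass hooks supply the specific rules
            public static void addBarToFoo(Foo foo, Bar bar) {
                if (foo != null)
                    foo.addBar(bar);
            }
        }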

    Read the article

  • Cancelling Route Navigation in AngularJS Controllers

    - by dwahlin
    If you’re new to AngularJS check out my AngularJS in 60-ish Minutes video tutorial or download the free eBook. Also check out The AngularJS Magazine for up-to-date information on using AngularJS to build Single Page Applications (SPAs). Routing provides a nice way to associate views with controllers in AngularJS using a minimal amount of code. While a user is normally able to navigate directly to a specific route, there may be times when a user triggers a route change before they’ve finalized an important action such as saving data. In these types of situations you may want to cancel the route navigation and ask the user if they’d like to finish what they were doing so that their data isn’t lost. In this post I’ll talk about a technique that can be used to accomplish this type of routing task.

    The $locationChangeStart Event

    When route navigation occurs in an AngularJS application a few events are raised. One is named $locationChangeStart and the other is named $routeChangeStart (there are other events as well). At the current time (version 1.2) $routeChangeStart doesn’t provide a way to cancel route navigation; however, the $locationChangeStart event can be used to cancel navigation. If you dig into the AngularJS core script you’ll find the following code that shows how the $locationChangeStart event is raised as the $browser object’s onUrlChange() function is invoked:

        $browser.onUrlChange(function (newUrl) {
            if ($location.absUrl() != newUrl) {
                if ($rootScope.$broadcast('$locationChangeStart', newUrl,
                        $location.absUrl()).defaultPrevented) {
                    $browser.url($location.absUrl());
                    return;
                }
                $rootScope.$evalAsync(function () {
                    var oldUrl = $location.absUrl();
                    $location.$$parse(newUrl);
                    afterLocationChange(oldUrl);
                });
                if (!$rootScope.$$phase) $rootScope.$digest();
            }
        });

    The key part of the code is the call to $broadcast. This call broadcasts the $locationChangeStart event to all child scopes so that they can be notified before a location change is made. To handle the $locationChangeStart event you can use the $rootScope.$on() function. For this example I’ve added a call to $on() into a function that is called immediately after the controller is invoked:

        function init() {
            //initialize data here..

            //Make sure they're warned if they made a change but didn't save it
            //Call to $on returns a "deregistration" function that can be called to
            //remove the listener (see routeChange() for an example of using it)
            onRouteChangeOff = $rootScope.$on('$locationChangeStart', routeChange);
        }

    This code listens for the $locationChangeStart event and calls routeChange() when it occurs. The value returned from calling $on is a “deregistration” function that can be called to detach from the event. In this case the deregistration function is named onRouteChangeOff (it’s accessible throughout the controller). You’ll see how the onRouteChangeOff function is used in just a moment.
    Cancelling Route Navigation

    The routeChange() callback triggered by the $locationChangeStart event displays a modal dialog prompting the user to confirm leaving the page. Here’s the code for routeChange():

        function routeChange(event, newUrl) {
            //Navigate to newUrl if the form isn't dirty
            if (!$scope.editForm.$dirty) return;

            var modalOptions = {
                closeButtonText: 'Cancel',
                actionButtonText: 'Ignore Changes',
                headerText: 'Unsaved Changes',
                bodyText: 'You have unsaved changes. Leave the page?'
            };

            modalService.showModal({}, modalOptions).then(function (result) {
                if (result === 'ok') {
                    onRouteChangeOff(); //Stop listening for location changes
                    $location.path(newUrl); //Go to page they're interested in
                }
            });

            //prevent navigation by default since we'll handle it
            //once the user selects a dialog option
            event.preventDefault();
            return;
        }

    Looking at the parameters of routeChange() you can see that it accepts an event object and the new route that the user is trying to navigate to. The event object is used to prevent navigation since we need to prompt the user before leaving the current view. Notice the call to event.preventDefault() at the end of the function. The modal dialog is shown by calling modalService.showModal() (see my previous post for more information about the custom modalService that acts as a wrapper around Angular UI Bootstrap’s $modal service). If the user selects “Ignore Changes” then their changes will be discarded and the application will navigate to the route they intended to go to originally. This is done by first detaching from the $locationChangeStart event by calling onRouteChangeOff() (recall that this is the function returned from the call to $on()) so that we don’t get stuck in a never-ending cycle where the dialog continues to display when they click the “Ignore Changes” button. A call is then made to $location.path(newUrl) to handle navigating to the target view. If the user cancels the operation they’ll stay on the current view.

    Conclusion

    The key to cancelling routes is understanding how to work with the $locationChangeStart event and cancelling it so that route navigation doesn’t occur. I’m hoping that in the future the same type of task can be done using the $routeChangeStart event, but for now this code gets the job done. You can see this code in action in the Customer Manager application available on GitHub (specifically the customerEdit view). Learn more about the application here.

    Read the article

  • Simple OpenGL program major slowdown at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is that I face a big slowdown in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. At 600x600 these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

    First pass vertex shader:

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    First pass fragment shader:

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second pass vertex shader:

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    Second pass fragment shader:

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);

            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));

            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);

                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);
                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.
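
    A minimal sketch of one way to attack that Memory Controller load (added for illustration, not from the original post): drop the position render target entirely and reconstruct view-space position from the depth buffer in the second pass, cutting G-buffer bandwidth by a third or more. DepthMap and InvProjMatrix are assumed names for the first pass's depth attachment and the inverse of the projection matrix:

        uniform sampler2D DepthMap;    // depth attachment from the first pass
        uniform mat4 InvProjMatrix;    // inverse of the projection matrix

        vec3 PositionFromDepth(vec2 TexCoord){
            float z = texture(DepthMap, TexCoord).r * 2.0 - 1.0; // [0,1] depth back to NDC
            vec4 ndc = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
            vec4 view = InvProjMatrix * ndc;                     // unproject
            return view.xyz / view.w;                            // perspective divide
        }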

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    Having an issue with an Apache front server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case: Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v2.4, BTW) virtual host block:

        DocumentRoot /path/to/foo
        ServerName foo.com
        ServerAdmin [email protected]

        RewriteEngine On

        <Directory /path/to/foo>
            AllowOverride None
            Require all granted
        </Directory>

        ProxyRequests Off
        ProxyVia Off
        ProxyPreserveHost On

        <Proxy *>
            AddDefaultCharset off
            Order deny,allow
            Allow from all
        </Proxy>

        # don't pass through requests for statics (image, js, css, etc.)
        <Location /static/>
            ProxyPass !
        </Location>

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>
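
    One thing worth checking (a sketch added for illustration, not from the original post): Apache merges <Location> sections in the order they appear in the configuration file, with later sections overriding earlier ones, so the catch-all <Location /> above may be clobbering the /static/ exclusion. Declaring the exclusion last, everything else unchanged, may be enough:

        <Location />
            ProxyPass http://localhost:8081/
            ProxyPassReverse http://localhost:8081/
            SetEnv proxy-sendchunks 1
        </Location>

        # declared last so its ProxyPass ! wins the merge for /static/
        <Location /static/>
            ProxyPass !
        </Location>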

    Read the article

  • Drag and drop feature for a website

    - by gpuguy
    I have to design a website which will have drag and drop features for creating an e-card. You select items from a toolbox and drag and drop them onto the card area. Once you have completed the design you can publish the e-card on the web by clicking a "Save and publish" button. What are the possible technologies for implementing this feature? The requirement is that the feature should not degrade the performance of the website, and should not take much time to publish once the user clicks "Save and publish".
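
    As a concrete starting point, the native HTML5 drag-and-drop API covers the toolbox-to-card interaction without any library (jQuery UI's draggable/droppable would be an alternative). A minimal sketch; all ids, file names, and markup are illustrative:

        <img id="clipart-star" src="star.png" draggable="true">
        <div id="card" style="width:400px;height:300px;border:1px solid #999"></div>
        <script>
            var item = document.getElementById('clipart-star');
            var card = document.getElementById('card');
            item.addEventListener('dragstart', function (e) {
                e.dataTransfer.setData('text/plain', e.target.id); // remember what is being dragged
            });
            card.addEventListener('dragover', function (e) {
                e.preventDefault(); // required to allow a drop
            });
            card.addEventListener('drop', function (e) {
                e.preventDefault();
                var id = e.dataTransfer.getData('text/plain');
                card.appendChild(document.getElementById(id).cloneNode(true)); // place a copy on the card
            });
        </script>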

    Read the article

  • Does using structured data semantic LocalBusiness schema markup work for local EMD URLs?

    - by ElHaix
    Based on what I have read about Google's recent Panda and Penguin updates, I'm getting the impression that using semantic markup may help improve SEO results. On an EMD (exact match domain) site that may have been hit, we list location-based products. We are now going to be adding itemtype="http://schema.org/Product" to each product, with relevant details. However, that product may be available in Los Angeles and also appear on a Seattle results page. We could add a LocalBusiness item type on each geo page to define the geo location for that page. The definition states: "A particular physical business or branch of an organization. Examples of LocalBusiness include a restaurant, a particular branch of a restaurant chain, a branch of a bank, a medical practice, a club, a bowling alley, etc." We could then use the location property, which would simply include the city/state details. I realize that this looks like it is meant for a physical location; however, could this be done without seeming black-hat?
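
    For reference, the combination described above could look roughly like this (a sketch for illustration only; all names and values are placeholders, and it implies nothing about how Google would judge the pattern):

        <div itemscope itemtype="http://schema.org/Product">
            <span itemprop="name">Example Product</span>
        </div>

        <div itemscope itemtype="http://schema.org/LocalBusiness">
            <span itemprop="name">Example Branch</span>
            <div itemprop="location" itemscope itemtype="http://schema.org/Place">
                <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
                    <span itemprop="addressLocality">Los Angeles</span>,
                    <span itemprop="addressRegion">CA</span>
                </div>
            </div>
        </div>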

    Read the article

  • Bouncing ball slowing down over time

    - by user46610
    I use Unreal Engine 4 to bounce a ball off walls in a 2D space, but over time the ball gets slower and slower. Movement happens in the ball's Tick function:

        FVector location = GetActorLocation();
        location.X += this->Velocity.X * DeltaSeconds;
        location.Y += this->Velocity.Y * DeltaSeconds;
        SetActorLocation(location, true);

    When a wall is hit I get a Hit event with the normal of the collision. This is how I calculate the new velocity of the ball:

        FVector2D V = this->Velocity;
        FVector2D N = FVector2D(HitNormal.X, HitNormal.Y);
        FVector2D newVelocity = -2 * (V.X * N.X + V.Y * N.Y) * N + V;
        this->Velocity = newVelocity;

    The more the ball bounces around, the smaller its velocity gets. How do I prevent this speed loss when bouncing off walls? It's supposed to be a perfect bounce, without friction or anything.
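
    A sketch of one way to rule the math out as the culprit (added for illustration, not from the original question; API names follow recent UE4 releases): the reflection formula preserves speed only if HitNormal is exactly unit length, so normalizing the normal and explicitly restoring the incoming speed guarantees a lossless bounce:

        // On hit: reflect while explicitly preserving the incoming speed.
        FVector2D V = this->Velocity;
        FVector2D N = FVector2D(HitNormal.X, HitNormal.Y).GetSafeNormal(); // guard against non-unit normals
        const float Speed = V.Size();                                      // remember the incoming speed
        FVector2D Reflected = V - N * (2.0f * FVector2D::DotProduct(V, N));
        this->Velocity = Reflected.GetSafeNormal() * Speed;                // same speed, new direction

    If the speed still decays, the sweep in SetActorLocation(location, true) is another suspect: on the frame of a hit the actor is stopped at the wall, so the remainder of that tick's movement is silently dropped.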

    Read the article

  • Subversion LDAP Configuration

    - by dbyrne
    I am configuring a Subversion repository to use basic LDAP authentication. I have an entry in my httpd.conf file that looks like this:

        <Location /company/some/location>
            DAV svn
            SVNPath /repository/some/location
            AuthType Basic
            AuthName LDAP
            AuthBasicProvider ldap
            Require valid-user
            AuthLDAPBindDN "cn=SubversionAdmin,ou=admins,o=company.com"
            AuthLDAPBindPassword "XXXXXXX"
            AuthLDAPURL "ldap://company.com/ou=people,o=company.com?personid"
        </Location>

    This works fine for living, breathing people who need to log in. However, I also need to provide application accounts access to the repository. These accounts are in a different OU. Do I need to add a whole new <Location> element, or can I add a second AuthLDAPURL to the existing entry?
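
    As far as I know, mod_authnz_ldap honors only one AuthLDAPURL per scope, so a second one in the same block would simply override the first. One option is a pair of named providers (a sketch assuming Apache 2.4, or 2.2 with mod_authn_alias loaded; "ou=applications" is a placeholder for the application accounts' OU):

        <AuthnProviderAlias ldap ldap-people>
            AuthLDAPBindDN "cn=SubversionAdmin,ou=admins,o=company.com"
            AuthLDAPBindPassword "XXXXXXX"
            AuthLDAPURL "ldap://company.com/ou=people,o=company.com?personid"
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap ldap-apps>
            AuthLDAPBindDN "cn=SubversionAdmin,ou=admins,o=company.com"
            AuthLDAPBindPassword "XXXXXXX"
            AuthLDAPURL "ldap://company.com/ou=applications,o=company.com?personid"
        </AuthnProviderAlias>

        <Location /company/some/location>
            DAV svn
            SVNPath /repository/some/location
            AuthType Basic
            AuthName LDAP
            AuthBasicProvider ldap-people ldap-apps
            Require valid-user
        </Location>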

    Read the article
