Timestamp converter.
Unix / epoch timestamps ↔ human-readable dates, both directions, with timezone-aware output and a live clock. Seconds or milliseconds, ISO 8601 / RFC 2822 / relative formats. Browser-only.
no signup · nothing uploads
What a Unix timestamp actually is
A single integer: the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970 (the "Unix epoch"). That's it — no timezone, no format, no ambiguity. 0 is the epoch itself. 1735689600 is the start of 2025. 1776883423 is somewhere near the moment this page was rendered.
Because the timestamp is an integer, it's trivial to store (one number), compare (larger = later), sort (integer ordering = chronological ordering), and transmit across systems that might disagree about timezones. That's why it's the default internal representation for dates in Unix, Linux, macOS, PostgreSQL, every JavaScript runtime, and countless APIs.
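The store/compare/sort properties fall out directly in code. A minimal sketch with a hypothetical event list (the names and values are made up for illustration):

```javascript
// Because timestamps are plain integers, a numeric sort is
// automatically a chronological sort — no date parsing needed.
const events = [
  { name: "deploy", ts: 1735689600 }, // 2025-01-01 00:00:00 UTC
  { name: "build",  ts: 1735686000 }, // one hour earlier
  { name: "alert",  ts: 1735693200 }, // one hour later
];

events.sort((a, b) => a.ts - b.ts);      // numeric order = time order
console.log(events.map((e) => e.name));  // ["build", "deploy", "alert"]
console.log(events[2].ts > events[0].ts); // true: larger = later
```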
Seconds vs milliseconds (the gotcha)
Most Unix tools use seconds. JavaScript's Date.now() returns milliseconds. Some JSON APIs mix the two depending on who implemented them. The shortcut to tell them apart: a seconds timestamp for "now" is ~1,700,000,000 (10 digits). A milliseconds timestamp is ~1,700,000,000,000 (13 digits). If you're off by a factor of 1000, you probably mixed the two.
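The digit-count shortcut is easy to automate. A minimal sketch (the `1e12` threshold is a common heuristic, not a standard: 1e12 milliseconds is September 2001, while 1e12 seconds would be the year 33658, so contemporary values on either side of it are unambiguous):

```javascript
// Normalize a timestamp of unknown unit to milliseconds.
// Values >= 1e12 are assumed to already be milliseconds.
function toMilliseconds(ts) {
  return Math.abs(ts) >= 1e12 ? ts : ts * 1000;
}

console.log(new Date(toMilliseconds(1735689600)).toISOString());
// "2025-01-01T00:00:00.000Z" — detected as seconds
console.log(new Date(toMilliseconds(1735689600000)).toISOString());
// "2025-01-01T00:00:00.000Z" — same instant, detected as milliseconds
```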
Converting in popular languages
// JavaScript
const seconds = Math.floor(Date.now() / 1000);
const date = new Date(seconds * 1000);  // to Date
const iso = date.toISOString();         // "2026-04-22T..."

# Python
import time, datetime
seconds = int(time.time())
dt = datetime.datetime.fromtimestamp(seconds)  # local
dt_utc = datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)  # UTC (utcfromtimestamp is deprecated since 3.12)

# Bash (Linux / GNU coreutils)
date +%s                            # current epoch
date -d @1735689600                 # to readable
date -d "2026-04-22 12:00:00" +%s   # to epoch

# macOS / BSD bash
date +%s
date -r 1735689600                  # to readable

-- PostgreSQL
SELECT EXTRACT(EPOCH FROM NOW())::int;
SELECT to_timestamp(1735689600);

-- MySQL
SELECT UNIX_TIMESTAMP();
SELECT FROM_UNIXTIME(1735689600);
The Year 2038 problem
A 32-bit signed integer can hold values up to 2,147,483,647. That number of seconds after January 1, 1970 is 2038-01-19 03:14:07 UTC. After that moment, 32-bit Unix systems will wrap to negative and think it's 1901. The fix is 64-bit time_t, which most modern systems (macOS, Linux on 64-bit, Windows) already use. Legacy embedded devices and old databases are the remaining risk — about a dozen years from now, expect a wave of quiet failures in the least-maintained corners of the software world.
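You can simulate the rollover without a 32-bit machine. JavaScript's `| 0` operator coerces a number to a signed 32-bit integer, which is exactly the wraparound a 32-bit time_t suffers:

```javascript
const max32 = 2147483647; // 2^31 - 1, the largest 32-bit signed value
console.log(new Date(max32 * 1000).toISOString());
// "2038-01-19T03:14:07.000Z" — the last representable second

// One second later, the 32-bit counter overflows to -2,147,483,648:
const wrapped = (max32 + 1) | 0;
console.log(new Date(wrapped * 1000).toISOString());
// "1901-12-13T20:45:52.000Z" — suddenly it's 1901
```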
FAQ
What is a Unix timestamp?
A Unix timestamp (also called epoch time or POSIX time) is the number of seconds elapsed since January 1, 1970 at 00:00:00 UTC. It's the default way computers store dates internally — a single integer that unambiguously identifies a moment in time, independent of timezone. 1735689600, for example, is midnight UTC on New Year's Day 2025. Most programming languages return it via functions like Math.floor(Date.now() / 1000) (JavaScript), time.time() (Python), or time() (C).
What's the difference between seconds and milliseconds?
Seconds is the original Unix convention and what most databases, APIs, and Unix command-line tools use. Milliseconds (seconds × 1000) is used by JavaScript's Date.now(), some JSON APIs, and anywhere sub-second precision matters. A 10-digit number is seconds (through year 2286); 13 digits is milliseconds. When in doubt, check the length — the tool auto-detects based on your unit toggle but won't catch every mislabeled value.
How do timezones work with Unix timestamps?
The timestamp itself has no timezone — it's always UTC seconds-since-1970. Timezones come in only when you convert to a human-readable date. The tool shows the same moment in multiple zones: UTC (universal), your local zone (auto-detected), and whichever zone you pick from the dropdown. When you convert a date BACK to a timestamp, the tool uses the timezone embedded in your input (explicit Z, +00:00, or similar) or assumes local if none is given.
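The "same moment, many renderings" idea is what Intl.DateTimeFormat does natively; this is a minimal sketch of that kind of multi-zone display, not the tool's actual code:

```javascript
// One instant, formatted in three zones. The timestamp never
// changes — only the human-readable rendering does.
const ts = 1735689600; // 2025-01-01 00:00:00 UTC
const date = new Date(ts * 1000);

for (const timeZone of ["UTC", "America/New_York", "Asia/Tokyo"]) {
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone,
    dateStyle: "medium",
    timeStyle: "long",
  });
  console.log(timeZone, "→", fmt.format(date));
}
// UTC shows midnight on Jan 1, 2025; New York is still the evening
// of Dec 31, 2024; Tokyo is already 9 AM on Jan 1.
```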
Why does my date input produce an unexpected timestamp?
Usually a timezone interpretation mismatch. The string '2026-04-22 12:00:00' (no timezone) is interpreted as local time by the Date constructor — so the resulting timestamp depends on who's viewing. If you want a specific timezone, be explicit: '2026-04-22T12:00:00Z' for UTC, '2026-04-22T12:00:00-07:00' for PDT, etc. When sharing timestamp-generating code across timezones, always use ISO 8601 with explicit offset.
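The offset-interpretation difference is easy to see with Date.parse, which returns epoch milliseconds:

```javascript
// Explicit offsets: two strings, same wall-clock time, 7 hours apart.
const utc = Date.parse("2026-04-22T12:00:00Z") / 1000;      // noon UTC
const pdt = Date.parse("2026-04-22T12:00:00-07:00") / 1000; // noon PDT
console.log(pdt - utc); // 25200 — the 7-hour offset, in seconds

// No offset: the spec says interpret as *local* time, so this value
// differs from machine to machine — the root of most surprises.
const local = Date.parse("2026-04-22T12:00:00") / 1000;
```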
What's the Year 2038 problem?
32-bit signed Unix timestamps overflow on January 19, 2038 at 03:14:07 UTC — the maximum 32-bit signed integer is 2,147,483,647, which is that moment in seconds-since-1970. After that, systems using 32-bit signed ints will wrap to negative numbers and think it's 1901. Modern systems use 64-bit integers (good until year 292 billion) but embedded devices, old databases, and legacy code may still be affected. The fix is 64-bit time_t (or 64-bit millisecond counts), which most modern systems already use.
How do I convert a Unix timestamp in JavaScript / Python / etc.?
JavaScript: `new Date(ts * 1000)` if ts is seconds, or `new Date(ts)` if milliseconds. The Date constructor takes milliseconds. Python: `datetime.datetime.fromtimestamp(ts)` (local) or `datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)` (UTC — `utcfromtimestamp` is deprecated since Python 3.12). Bash: `date -d @1735689600` (GNU) or `date -r 1735689600` (BSD/macOS). SQL (PostgreSQL): `to_timestamp(1735689600)`. SQL (MySQL): `FROM_UNIXTIME(1735689600)`.
Why 1970 as the epoch?
Unix was developed at Bell Labs in 1969-1971. The earliest versions used 1971 as the epoch, but the developers settled on January 1, 1970 because it was a clean date just before the OS started being used commercially, and it left plenty of 'headroom' with 32-bit seconds. Other operating systems chose differently: Windows NT counts from January 1, 1601; Apple's classic HFS from January 1, 1904; .NET from January 1, 0001. Unix time won out through POSIX standardization and Linux's dominance.
Can I convert negative timestamps?
Yes — negative Unix timestamps represent dates before 1970. -86400 is December 31, 1969 00:00:00 UTC. Most modern Date libraries handle them correctly. Databases and older programming languages may not; check your specific stack if you're working with pre-1970 dates.
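JavaScript's Date handles pre-1970 instants transparently; negative values just count backwards from the epoch:

```javascript
// One day before the epoch:
console.log(new Date(-86400 * 1000).toISOString());
// "1969-12-31T00:00:00.000Z"

// Any pre-1970 date parses to a negative timestamp — the Apollo 11
// landing (July 20, 1969, 20:17 UTC), for example:
console.log(Date.parse("1969-07-20T20:17:00Z") / 1000); // -14182980
```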
Is anything I enter uploaded or logged?
No. All conversions use the browser's native Date and Intl APIs. No network requests, no server, no analytics event includes your timestamps. Your last-used unit, timezone, and input values are stored in localStorage for convenience — that data stays in your browser.
Working with data? You might also want the base64 encoder, the JSON ↔ CSV converter, or the regex tester. See the full dev hub.